Tiny machine learning chips

https://semiwiki.com/ip/eta-compute/282766-tinyml-makes-big-impact-in-edge-ai-applications/

It would be neat to understand how these machine-learning principles work. Apparently, these guys are using a DSP and some novel software approach to perform convolutional neural network operations efficiently. The P2 could do this, too, maybe not as quickly and certainly burning more power, but it could be made very simple to approach and integrate into user applications.
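For context, the core operation of a convolutional neural network layer is just a sliding multiply-accumulate over a window. A minimal pure-Python sketch (illustrative only, not Eta Compute's implementation; on a DSP or the P2 this inner loop is what the hardware multipliers would grind through):

```python
# Minimal 2D convolution with "valid" padding - the multiply-accumulate
# loop at the heart of a CNN layer. Pure Python for clarity; real
# implementations vectorize or hand this to DSP/MAC hardware.
def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out
```

The point of the specialized chips is doing millions of these multiply-accumulates per second at low power; the algorithm itself is this simple.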

Comments

  • An oxymoronic statement in the two opening sentences:
    Machine Learning (ML) has become extremely important for many computing applications, especially ones that involve interacting with the physical world. Along with this trend has come the development of many specialized ML processors for cloud and mobile applications.
    Cloud and phones are probably the least real parts of our world.

  • evanh Posts: 9,041
    edited 2020-02-16 - 23:07:30
    The DSP will be doing the usual signal processing, so that leaves the plain old ARM processor for the machine learning. Just what that covers is an open question, I suspect.

    EDIT: I guess "continuous voltage and frequency scaling (CVFS)" is an efficient substitute for floats. Sounds like they've written a library.

  • What I gathered was that it's a software process. Their memory isn't any bigger than ours. We can do whatever they are doing, but maybe in a simpler and more flexible context.
  • At their core, these things use various ReLU activation functions.

    Perhaps the CORDIC could do some of these (it would be interesting to know which ones), but doing them in LUT RAM is probably faster.

    You need a lot of number crunching - with the Donkey cars, even the Raspberry Pi 3/4 is a bit slow for the training phase, so people copy the data onto their laptops or a remote server to crunch the training data, then bring the trained model back to the Donkey car to run inference around the track. The Jetson Nano seems to be fast enough to do the training onboard the car, however.
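    For reference, ReLU itself is trivial to evaluate - it's the sheer volume of multiply-accumulates feeding it that costs cycles. A sketch of ReLU and a couple of related activations (which specific ones any given chip accelerates is a guess on my part):

    ```python
    import math

    def relu(x):
        # Rectified linear unit: max(0, x). Cheap - no transcendentals,
        # just a compare, so LUT RAM or even a conditional move covers it.
        return x if x > 0.0 else 0.0

    def leaky_relu(x, alpha=0.01):
        # Variant that keeps a small slope for negative inputs
        return x if x > 0.0 else alpha * x

    def sigmoid(x):
        # Older-style activation; needs exp(), which is where a CORDIC
        # or a lookup table would actually earn its keep on a small MCU
        return 1.0 / (1.0 + math.exp(-x))
    ```

    So the CORDIC would mainly help with the exp/tanh-style activations; plain ReLU is cheaper than a CORDIC round trip.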


  • Cluso99 Posts: 15,865
    edited 2020-02-17 - 03:41:55
    Chip,
    At work we are about to embark on using machine learning in an attempt to predict some future results. We have been in the data-gathering phase, so we can use past history to predict the future. Python is the major language used; there are lots of ML libraries out there for Python, although I have no idea at present whether they will be useful.
    As I learn more, I'll let you know as much as I am permitted to divulge.

    The one thing is that there is a lot of data. That data is processed to learn trends (i.e. calculating all sorts of statistics such as moving averages, etc.), and those trends are then applied going forward.
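    As a concrete example of that kind of trend statistic, here is a simple trailing moving average in Python (a generic sketch, not the actual pipeline described above):

    ```python
    from collections import deque

    def moving_average(data, window):
        # Trailing moving average over a fixed-size window;
        # emits one smoothed value once the window is full.
        buf = deque(maxlen=window)
        out = []
        for x in data:
            buf.append(x)
            if len(buf) == window:
                out.append(sum(buf) / window)
        return out
    ```

    In practice libraries like pandas provide rolling-window statistics like this out of the box, which is part of why Python dominates this kind of work.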
  • jmg Posts: 14,289
    cgracey wrote: »
    It would be neat to understand how these machine-learning principles work. Apparently, these guys are using a DSP and some novel software approach to perform convolutional neural network operations efficiently. The P2 could do this, too, maybe not as quickly and certainly burning more power, but it could be made very simple to approach and integrate into user applications.

    AI is the new buzzword; everyone rushes to tag it onto their offerings.
    I find it better to drill past the marketdroid fluff and look at the basic silicon.

    e.g. this is interesting:
    https://www.eetimes.com/xmos-adapts-xcore-into-aiot-crossover-processor/#

    The Xcore.ai chip delivers up to 3200 MIPS, 51.2 GMACCs and 1600 MFLOPS. It has 1 Mbyte of embedded SRAM plus a low power DDR interface for expansion.

    and they show tasks mapped onto their 'cores', so Parallax could do something very similar.
    Less clear is whether the magical '$1 in volumes' part they talk about includes the HS-USB, the 1 MB of SRAM, and all 16 'cores'?