intel compute card - practical or fluff ?
jmg
Has anyone seen I/O specs on the latest Intel release?
http://www.intel.com/content/www/us/en/compute-card/intel-compute-card.html
The web page makes all sorts of vague claims, but Google cannot find anything solid behind these claims:
Simplified Design
Save on engineering and design costs with a standardized I/O interface designed to support multiple devices.
Connect How You Want
Includes built-in integrated Wi-Fi and Bluetooth® wireless connectivity.
Google does find this:
Intel says that the card uses a variant of the USB-C port called "USB-C plus extension" to connect with the systems it's plugged into. That connector gives devices direct access to the USB and PCIe buses as well as HDMI and DisplayPort video outputs.
but normal USB-C does all those only one at a time, from my understanding.
and finds this
https://newsroom.intel.com/newsroom/wp-content/uploads/sites/11/2017/01/intel-compute-card-fact-sheet.pdf
which says
"Connection to devices will be done via an Intel Compute Card slot with a new standard connector (USB-C plus extension)"
How many pins on this 'new standard connector' ?
It all looks rather removed from 'real time' and rather like stone soup....
Comments
EDIT: Phil, now you must upgrade.
WTF, yet another frikken connector that will require more adapters to do anything with.
I swear I have more weight and cost buried in connectors and adapters in this house than actual compute power.
I could see a use for it as the computer brain in whatever vending machine, slot machine, or CNC machine you're now developing, with the feature of easy swap-out in case of failure/upgrade.
Hey, I missed that snippet of news about Intel building ARM SoCs.
Sounds a bit shocking, but Intel has already been building ARM processors for a long time. They have held a licence to do so since 1998, when they bought StrongARM from DEC. Then they had XScale in 2002, finally selling XScale to Marvell in 2006.
I'm not sure it's all about mobile, what with ARM trying to get into the server world. People like Google are looking to get away from those big, power-hungry x86 servers.
Here be an announcement of the Intel ARM program:
theverge.com/2016/8/16/12507568/intel-arm-mobile-chips-licensing-deal-idf-2016
Intel of course already makes ARMs and FPGAs, via their Altera purchase.
Yep.
Here is my vision of the future:
Big players like Google, Amazon, Facebook, want to get away from the power sapping x86 which costs them money in electric bills.
Other players, especially small ones, want to get away from ARM and all its licensing hassle.
Governments around the world want to get away from dependence on corporations in the USA.
Enter the open RISC-V architecture. Scalable from embedded SoCs to 128-bit server-class machines. Sponsored by Google and many others. See above.
All these players start to realize that it's better for them to work together and develop their own processing platform. They could perhaps rally around RISC-V much as they gather around Linux and Free and Open Source software, to their mutual benefit.
A monster like Google could get a RISC-V chip made as surely as Apple gets custom ARM designs built. A monster like Google could push an Android phone based on RISC-V.
Result: ARM is out of business. Intel is just a chip fab like TSMC or whatever.
Yeah, OK, I know, I'm dreaming...
I don't think that is quite where the issues are.
Problems today are less about any core used, and more about process engineering and data flow.
Servers have moved on from simple MPUs, and they focus on packet throughput, which means smarter memory and widely parallel processing - eg, for most servers, you do not need floating point maths, but you do need parsing.
This is also why FPGAs matter, and they can actually result in LESS power, if you have FPGA-type distributed routing decisions.
This puts intel in the box seat.
Let's just say I'm not convinced.
FPGAs are great for that custom, high-speed, bit- and logic-twiddling you might want to do in your boutique product. 25 years ago I worked on a project where they used a "sea" of FPGAs to develop the logic for a radio modem. That ended up being built as an ASIC. Or think of the FPGAs used in modern digital oscilloscopes and such.
When it comes to mass deployment, in the world's server farms or mobile devices or whatever, then things get standardized and end up being built into silicon directly. Again the FPGA loses out at scale. Making the thing in silicon directly will be cheaper and more power efficient than the FPGA.
I think of FPGA as a dev tool. As we see it being used in P2 development. Or the RISC V for that matter. Mostly there is no way it ends up in the end product. Except those niche high markup situations.
The FPGA provides the routing fabric, and intel can apply as much, or as little, FPGA as needed to make those DSP arrays hum...
Those Server-Embedded FPGAs are unlikely to look like your grandfather's FPGA.
Exactly. "already 'made in silicon directly'"
I'm still not convinced.
For any given logic design one can come up with, one can:
a) Build it as a bunch of 10nm transistors.
b) Build it as a bunch of huge logic cells, each of which is made of hundreds or thousands of 10nm transistors.
Not to mention the die space taken by a custom layout for my design vs. using a huge generic grid of FPGA cells.
Seems to me that for any large production run the "real" silicon solution will always beat an FPGA configuration for cost and power.
For a long time now I have noticed that FPGA solutions get so complex that their designers blow a CPU core into the thing so they can manage everything in software. The natural extension of this is that FPGA vendors start to build "real" CPUs into their FPGAs to save space, cost and power.
Now, perhaps there is a case to be made for the "on the fly" reconfiguration of your FPGA. Is that really a thing people do?
My grandfather would have loved an FPGA. If he had ever heard of digital logic. I come from a long line of nerds like that.
As always, it depends...
You also need to add in the cost of a revision, at that 10nm node.
Then factor in the early-access benefits of modest amounts of FPGA fabric.
There are reasons even high-speed USB parts often have a small MCU in them for configuration and test.
It is the flexible nature of that design that pays in the long run.
Likewise, most MCU parts today are Flash, even though I can claim to always manufacture ROM cheaper.
Q: If ROM can always beat Flash for Cost and Power, why would anyone buy Flash ?
A: Customers will pay for design longevity and flexibility.
I've seen these at Radio Shack (one was demonstrating itself by showing a video of what they sell), running Win10, and two at Micro Center. There it was both Windows and Linux....
Oh and your robots should be home in two weeks erco. And they leave for a walking tour of Asia in six weeks.
(fwiw - my company sells products which incorporate an FPGA. Because it's so special. And the FPGA makes it a very expensive product.)
As for that Intel product, I've looked at it again and again and I can't see what it's for. Or at least nothing I couldn't use an R-Pi for instead, with, I bet, much less overhead to get started.
The compute stick though, that one I got completely. This one I don't.
With something like Asymmetric Multiprocessing (AMP), which is used in areas such as automotive, you can use an SoC such as the Xilinx Zynq, which has two ARM Cortex-A9 cores, and run a different operating system on each core, e.g. Linux on core 1 and FreeRTOS on core 2. The cores pass messages between themselves and share resources such as DDR memory. So one core could gather user input while the other handles the lower-level device control.
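As a rough illustration of that message passing, here is a minimal sketch of a mailbox placed in shared DDR, with one core posting and the other polling. The base address, the struct layout, and the function names are my own assumptions for illustration; a real Zynq AMP design would normally use the Xilinx/OpenAMP RPMsg framework with interrupts rather than polling.

```c
/* Minimal AMP mailbox sketch: one message buffer in a reserved region of
 * shared DDR. Address, layout and names are assumptions for illustration,
 * not the OpenAMP/RPMsg protocol that Xilinx actually ships. */
#include <stdint.h>

#define SHARED_MAILBOX_ADDR 0x3E000000u   /* assumed DDR region kept out of both OSes */

typedef struct {
    volatile uint32_t ready;              /* 0 = empty, 1 = message pending */
    volatile uint32_t length;             /* payload bytes used */
    volatile uint8_t  payload[56];
} mailbox_t;

#define MAILBOX ((mailbox_t *)(uintptr_t)SHARED_MAILBOX_ADDR)

/* Producer core (say, the Linux side): post a message */
static void mailbox_send(const uint8_t *data, uint32_t len)
{
    while (MAILBOX->ready)                /* wait until the other core consumed the last one */
        ;
    for (uint32_t i = 0; i < len && i < sizeof MAILBOX->payload; i++)
        MAILBOX->payload[i] = data[i];
    MAILBOX->length = len;
    __sync_synchronize();                 /* make the payload visible before raising the flag */
    MAILBOX->ready = 1;
}

/* Consumer core (say, the FreeRTOS side): poll and consume */
static int mailbox_receive(uint8_t *out, uint32_t max)
{
    if (!MAILBOX->ready)
        return 0;
    uint32_t n = MAILBOX->length < max ? MAILBOX->length : max;
    for (uint32_t i = 0; i < n; i++)
        out[i] = MAILBOX->payload[i];
    __sync_synchronize();
    MAILBOX->ready = 0;                   /* hand the mailbox back */
    return (int)n;
}
```

Note the shared region would also need to be mapped non-cached (or explicitly flushed) on both cores; this sketch ignores that detail.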
Heck, Xilinx even provides a tool, SDSoC, that lets you program an FPGA from C or C++ source code. FPGAs and the tools to program them have come a long way in recent years, and their use is expanding greatly.
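To give a feel for it, here is a made-up example of the kind of plain C loop a high-level-synthesis flow like that can map onto the fabric; the function name and array size are mine, not taken from any Xilinx example:

```c
#define N 1024

/* Element-wise multiply-accumulate. A C-to-FPGA tool can pipeline this loop
 * and unroll it across parallel DSP slices instead of running it on the CPU. */
void vector_mac(const int a[N], const int b[N], int result[N])
{
    for (int i = 0; i < N; i++)
        result[i] = result[i] + a[i] * b[i];
}
```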
There is always a price variation; the key part is deciding what customers will pay (just like ROM is always cheaper than Flash, if you cost only the silicon).
Quite small MCUs today (sub-$1) include Configurable Logic peripherals that are like CPLD or tiny-FPGA logic elements.
Larger PSoC MCUs (sub-$2) from Cypress have more comprehensive logic peripherals (UDBs), so there is already a continuum out there of Logic + Micro, right up to the big ARM+FPGA parts from Altera/Intel.
You mean something like the ZynqBerry?
https://hackaday.io/project/7817-zynqberry
I'm sure it will increase the price of the board, but things are getting to where it would be reasonable.
There are cheap FPGA options such as Mojo, Papilio One, and Elbert 2 boards
Also, there are other plugin FPGA modules for the RasPi as well as the BeagleBone Black.