
Radio kill switch in Intel CPU

edited 2010-12-26 10:13 in General Discussion
Intel's Sandy Bridge processors have a remote kill switch

http://www.techspot.com/news/41643-intels-sandy-bridge-processors-have-a-remote-kill-switch.html

There is nothing wrong with your television set. Do not attempt to adjust the picture. We are controlling transmission. If we wish to make it louder, we will bring up the volume. If we wish to make it softer, we will tune it to a whisper. We will control the horizontal. We will control the vertical. We can roll the image, make it flutter. We can change the focus to a soft blur or sharpen it to crystal clarity. For the next hour, sit quietly and we will control all that you see and hear. We repeat: there is nothing wrong with your television set. You are about to participate in a great adventure. You are about to experience the awe and mystery which reaches from the inner mind to... The Outer Limits.
— Opening narration – The Control Voice – 1960s

Comments

  • Mike Green Posts: 23,101
    edited 2010-12-19 14:39
    So? This is the way most large organizations (including governmental ones) work whether you like it or not. If it's done well, it can be useful. If it's done poorly, it can provide an easy backdoor for hackers. Either way, it interferes with any semblance of privacy that you don't really have when you work for a big company. Get used to it.
  • RobotWorkshop Posts: 2,307
    edited 2010-12-19 14:49
    That really seems like a bad idea. It should be good for AMD and their business should pick up!
  • lardom Posts: 1,659
    edited 2010-12-19 15:08
    Chuckz, I guess you remember "The Twilight Zone"?
  • Mike Green Posts: 23,101
    edited 2010-12-19 15:12
    It depends. In hospitals and clinics where a computerized medical record system is used, there are hundreds of computers, usually running Windows. Things happen to them, including bugs in Windows, bugs in the medical record software, etc. It's a lot to expect the non-IT staff to have any idea of what to do to recover. This kind of "out-of-band" access to the hardware at a low level can save time. It might take 15 minutes or more to get from IT to wherever the computer is located if the complex spans a city block. With this kind of access, someone who knows what they're doing can fix it pretty quickly (if you can get them on the phone). On the other hand, if it's your own computer, or you work in a small company with pretty knowledgeable people, or if you're running a Mac, you don't need a centralized, regimented IT department. The whole philosophy of system maintenance is different and this kind of solution doesn't fit.
  • Martin_H Posts: 4,051
    edited 2010-12-19 15:42
    I foresee a massive CPU blackout due to a security flaw, a hacker, and a broadcast death packet of doom. In the post Stuxnet world, it might even be a foreign government looking to take out another country's infrastructure.
  • rod1963 Posts: 752
    edited 2010-12-19 16:25
    Wow, that really stinks: Big Brother with a kill switch, not to mention a blackhatter's dream. Imagine a dialysis clinic or major hospital where all the computers are Sandy Bridge and some IT idiot turns them off - you have a medical disaster on your hands, not to mention lawsuits against the IT staff, the hospital, etc.

    Come to think of it, it would make a great tool to terrorize businesses and extort money from them as well.

    But it also proves that even very bright people can be total idiots.
  • Clock Loop Posts: 2,069
    edited 2010-12-20 03:49
    So this is the best "tech" a company as big as Intel can come up with?

    We are almost in 2011, people; think back 10 years and tell me what INTEL has done over that amount of time.

    For how much money, resources, and tech they have, we should have optical cores by now. (or at least optical interconnects)

    They are milking you all for every ounce of flesh you still have left.
    And putting kill switches in the slaves' computers.

    First Intel did this with the unique ID. They tried that one a while ago.


    You are all truly living in a matrix.


    P.S. A very little company in CA can make an 8-core 80 MHz chip for 8 bux.
    A massive company like Intel can only make a slightly faster heater.
  • Heater. Posts: 21,230
    edited 2010-12-20 04:08
    Clock Loop,

    That all sounds very conspiratorial.

    Don't forget that if Intel were to sit on their butts "milking" us and not pushing forward, then AMD would soon be taking the upper hand.
    Outside of the PC world, ARM processors are catching up, with CPUs running at a gigahertz now. So Intel has a lot of competition and will have more in the future.

    If the optical interconnects and such that you dream of were so easy, or even desirable in a cost-sensitive market, then I'm sure someone would have been working on them.

    Having said that, I do agree: a kill switch in a CPU, what an insane idea.
  • edited 2010-12-20 04:56
    Mike Green wrote: »
    So? This is the way most large organizations (including governmental ones) work whether you like it or not. If it's done well, it can be useful. If it's done poorly, it can provide an easy backdoor for hackers. Either way, it interferes with any semblance of privacy that you don't really have when you work for a big company. Get used to it.

    There may be some benefits but there are security risks involved with it. This is an imperfect world and some people don't always have our best interests in mind.

    I've had tech support take control of my screen by remote control over the internet and watched them snoop.

    There was also a news article about a technician (won't name the well-known company) who allegedly copied a customer's naked pictures and posted them online.
  • Roy Eltham Posts: 3,000
    edited 2010-12-20 08:40
    I think you guys are overreacting. The CPU only has a kill switch feature. The motherboard has to have something on it to make it "remote".

    Reading the actual feature details at Intel's site explains what it actually is (part of their vPro and AMT stuff): it's something that has to be specifically built into the machine (motherboard, chipset, BIOS), and you have to set up a service (or subscribe to an existing third-party one) to even make it work (see the quick check sketched at the end of this post).

    It's likely to only be in laptops and corporate workstations and servers.

    You guys should do a little more research on things, instead of trusting some random blogs and the comments on them.
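
    If you're curious whether a given machine even exposes AMT, one rough check (nothing official, just a sketch in Python that assumes the well-known AMT management ports 16992/16993 and a made-up host address) is to see whether anything answers on those ports. An unprovisioned or non-vPro box simply won't be listening there:

        import socket

        # Intel AMT's out-of-band web interface answers on these TCP ports,
        # but only when the platform has vPro/AMT hardware and has actually
        # been provisioned by an administrator.
        AMT_PORTS = (16992, 16993)  # HTTP, HTTPS

        def amt_ports_open(host, timeout=2.0):
            """Return the AMT ports on 'host' that accept a TCP connection."""
            open_ports = []
            for port in AMT_PORTS:
                try:
                    with socket.create_connection((host, port), timeout=timeout):
                        open_ports.append(port)
                except OSError:
                    pass  # closed, filtered, or host unreachable
            return open_ports

        if __name__ == "__main__":
            host = "192.168.1.50"  # hypothetical address of the box to check
            found = amt_ports_open(host)
            if found:
                print(host, "has something listening on AMT ports", found)
            else:
                print(host, "shows no reachable AMT management interface")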
  • edited 2010-12-20 08:57
    Roy Eltham wrote: »
    I think you guys are overreacting. The CPU only has a kill switch feature. The motherboard has to have something on it to make it "remote".

    Or the "remote" part could be supplied in software, via malware:

    "The new way: Covert remote access

    Intel's preferred solution today is to have a PC equipped with an Intel Core 2-based processor, Q45 chipset and an 82567LM network chip. This combination of components allows covert remote access via something Intel calls vPro. And, it's built right in."


    http://www.tgdaily.com/hardware-opinion/39455-big-brother-potentially-exists-right-now-in-our-pcs-compliments-of-intels-vpr

    "In truth, these abilities may or may not exist today in vPro. I doubt we'll ever know for sure because if they did Intel wouldn't want to publish that information. And to be sure, I'm not saying these abilities do exist. Let's be clear about that. But, the possibility of them existing is definitely there and that's the point of this opinion piece. As a point of fact, it wouldn't even be difficult to implement these abilities being discussed. It would be a mild extension to the incredible footprint of existing technology already in the CPU, chipset and ethernet controller."

    http://en.wikipedia.org/wiki/Intel_vPro
  • Roy Eltham Posts: 3,000
    edited 2010-12-20 09:14
    Seriously?!

    I give up, you people will believe anything you want.
  • rod1963 Posts: 752
    edited 2010-12-20 11:14
    Well, if any corporation adopts this tech, they deserve whatever malicious hack befalls them. It's just more stupid and dangerous tech created by control freaks, because this sort of tech is ripe for misuse.
  • Ron Czapala Posts: 2,418
    edited 2010-12-20 11:39
    As the article points out, this is a poor solution. Swapping the hard drive into another computer or an external USB/FireWire enclosure lets a thief access the data. The data is the valuable asset - not the hardware!
  • Clock Loop Posts: 2,069
    edited 2010-12-20 11:45
    One would think that Intel could come up with better ideas than a remote CPU kill.

    Optical interconnects are nothing new; heck, they make them in optoisolators, and they have become quite advanced, cheap, miniaturized and laser-like in intensity.

    But the only thing we get from Intel is more cutesy spacemen in colored outfits dancing around, while they pack our chips with unique IDs and kill switches.

    No 5 GHz cores, no 10 GHz cores, no 512-bit CPUs, etc. No CPU optical interconnects either, whether intra-CPU, CPU to motherboard, or even to RAM.

    Don't give me this Moore's law this, price that... We have been stalled in CPU speeds for years, and no one is asking why; everyone is quick to blame it on thermal or process issues.

    Bull. Intel has been talking about nano-scale thermal channels intra-chip for many years now...
    It's all garbage to keep us buying sub-4 GHz processors, old tech, and limited abilities, with tracking IDs and kill options.

    Give me that 512-bit, 50-core, 10 GHz CPU already. And don't tell me it's not possible; I'll design the damn thing for you, but you will need to give me the same access to money that Intel has.
  • Invent-O-Doc Posts: 768
    edited 2010-12-21 02:38
    This is an emerging requirement for government computers. They already have remote wipe on all the phones. I don't care as long as the consumer versions I purchase do not have this feature.
  • Kevin Wood Posts: 1,266
    edited 2010-12-24 22:40
    Roy Eltham wrote:
    You guys should do a little more research on things, instead of trusting some random blogs and the comments on them.

    You must be new to the internet.

    :)
  • kwinn Posts: 8,697
    edited 2010-12-25 21:00
    Re: One would think that Intel could come up with better ideas than a remote CPU kill.

    > Optical interconnects are nothing new; heck, they make them in optoisolators, and they have become quite advanced, cheap, miniaturized and laser-like in intensity.

    ** There is quite a bit of difference between having an IR LED source activate a single transistor or triac and using IR/light for interconnecting millions of individual transistors.

    > But the only thing we get from Intel is more cutesy spacemen in colored outfits dancing around, while they pack our chips with unique IDs and kill switches.

    ** I agree their marketing may need some work, and unique IDs and kill switches have limited appeal (no appeal at all for me), but there may be situations where they are useful. As long as having them enabled is optional, I have no problem with it.

    > No 5 GHz cores, no 10 GHz cores, no 512-bit CPUs, etc. No CPU optical interconnects either, whether intra-CPU, CPU to motherboard, or even to RAM.

    ** Going beyond 32-bit CPUs involves trading off code density for MIPS/FLOPS. Look at the speed/code-density difference between a Spin and a PASM program on the Propeller for an example of this.
    ** CPU-to-motherboard optical interconnects face even greater difficulties than on-chip interconnects do. Circuit traces connecting pins on packaged chips are a well-understood technology. How do you connect hundreds of optical signals to a motherboard and then to other chips?

    > Don't give me this Moore's law this, price that... We have been stalled in CPU speeds for years, and no one is asking why; everyone is quick to blame it on thermal or process issues.

    ** CPU clock speeds ARE stalled due to power dissipation limits inherent to the semiconductor technology. Granted, the heat can be removed by several methods and those chips can then be clocked at higher speeds, but doing so is expensive, and price is an important consideration for most of us.

    > Bull. Intel has been talking about nano-scale thermal channels intra-chip for many years now...
    > It's all garbage to keep us buying sub-4 GHz processors, old tech, and limited abilities, with tracking IDs and kill options.

    ** Intra-chip thermal channels may distribute the heat evenly over the chip, but that heat still has to be removed from the chip somehow.

    > Give me that 512-bit, 50-core, 10 GHz CPU already. And don't tell me it's not possible; I'll design the damn thing for you, but you will need to give me the same access to money that Intel has.

    ** Extending the number of bits may help solve problems in some ways, but it creates others as well. How many problems need 512-bit floating-point/integer math or other instructions? How do you access 8- or 16-bit values in that 512-bit block of data? (A toy sketch of that sub-word access is at the end of this post.) As for the 50-core CPU, read up on the problems, tradeoffs, and limitations Parallax has encountered with the design of the Prop II. The same thing applies to Intel. I have serious doubts that you could do what you claim even with access to the money and resources that Intel has. Like a lot of things in life, it looks a lot easier than it is, particularly when you are looking at an individual or organization that is very good at what they do.
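
    To make the sub-word access point concrete, here is a toy model (a sketch in Python, not tied to any real instruction set): treat a 512-bit register as one big integer split into 16-bit lanes, and pull a lane out or put one back with shifts and masks. On real silicon every one of those shift-and-mask dances costs extra instructions, which is exactly the code-density price being traded away.

        # Toy model of a 512-bit register: 32 lanes of 16 bits, held in one Python int.
        LANE_BITS = 16
        NUM_LANES = 512 // LANE_BITS       # 32 lanes
        LANE_MASK = (1 << LANE_BITS) - 1   # 0xFFFF

        def get_lane(reg, i):
            """Extract 16-bit lane i from the 512-bit value: shift, then mask."""
            return (reg >> (i * LANE_BITS)) & LANE_MASK

        def set_lane(reg, i, value):
            """Replace 16-bit lane i: clear the old lane, then OR in the new value."""
            shift = i * LANE_BITS
            cleared = reg & ~(LANE_MASK << shift)
            return cleared | ((value & LANE_MASK) << shift)

        reg = 0
        reg = set_lane(reg, 0, 0x1234)
        reg = set_lane(reg, NUM_LANES - 1, 0xBEEF)
        assert get_lane(reg, 0) == 0x1234
        assert get_lane(reg, NUM_LANES - 1) == 0xBEEF
        print(hex(reg))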
  • Clock Loop Posts: 2,069
    edited 2010-12-26 06:26
    kwinn wrote: »
    I have serious doubts that you could do what you claim even with access to the money and resources that Intel has.

    Yea, I'm a big dummy. What WAS I thinking? You know me best, Mom.
    (as I climb back into my box of limitations and restrictions)
  • mctrivia Posts: 3,772
    edited 2010-12-26 08:48
    Going over 4 GHz really does not make sense. If programmed right, a 4-core 4 GHz computer will run faster than a 16 GHz single core and use a lot less power and generate a lot less heat. (There's a rough sketch of the "programmed right" part at the end of this post.)

    Even going to 64-bit over 32-bit really only improves math-heavy software. Office gets no added improvement.

    Optical channels are not inherently better unless the computer itself is optical. Copper handles 10 GHz just fine, and Intel has even done tests at 256 GHz inter-core.
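
    A rough sketch of the "programmed right" part (a toy Python example; the workload and core count are made up, and the point is only that the job has to split cleanly for the extra cores to pay off): the same CPU-bound task is run once on one core and once carved into four chunks handed to four worker processes.

        import time
        from multiprocessing import Pool

        def count_primes(bounds):
            """Toy CPU-bound job: count primes in [lo, hi) by trial division."""
            lo, hi = bounds
            count = 0
            for n in range(max(lo, 2), hi):
                if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                    count += 1
            return count

        if __name__ == "__main__":
            limit, cores = 200_000, 4
            chunks = [(i * limit // cores, (i + 1) * limit // cores)
                      for i in range(cores)]

            t0 = time.perf_counter()
            serial = count_primes((0, limit))
            t1 = time.perf_counter()

            # Four worker processes; the chunks aren't perfectly balanced,
            # so the speedup will be somewhat less than 4x.
            with Pool(cores) as pool:
                parallel = sum(pool.map(count_primes, chunks))
            t2 = time.perf_counter()

            assert serial == parallel
            print("serial: %.2fs   4 workers: %.2fs" % (t1 - t0, t2 - t1))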
  • kwinn Posts: 8,697
    edited 2010-12-26 10:13
    Clock Loop, if you feel I was implying you were a dummy, then please accept my apologies. Nothing could be further from the truth. I put "I have serious doubts" there because you may be the next winner of the Nobel prize in solid-state physics for all I know. I just wonder if you realize the complexity and magnitude of the task. How many multi-million-transistor chips have you designed?

    What I do know is there are a lot of technical and economic difficulties involved in any approach to increasing CPU power.

    I am not sure if a 512-bit-wide instruction computer has been built, but VLIW computers have already been built and sold. Judging by the lack of information about them, I would guess they were not a great success.

    There are also GPU chips available with multiple CPUs. I am not sure how many are on the largest chip, but I believe one chip has at least 128 CPUs in a pipeline.

    The tradeoff for the VLIW systems is the high cost and the limited range of problems they are suited to solve. For the GPUs the cost is reasonable, but they are optimized for graphics processing and the limited range of problems that can make use of that architecture.

    The Intel and AMD chips have the advantage of being well established, available in large numbers, reasonably priced, powerful enough for solving a wide range of problems, and able to be combined to produce large parallel processing systems. They also have a huge base of software available.