Shop OBEX P1 Docs P2 Docs Learn Events
Fill the Big Brain - Page 32 — Parallax Forums

Fill the Big Brain

1293032343559

Comments

  • HumanoidoHumanoido Posts: 5,770
    edited 2011-06-11 08:14
    Mike G wrote: »
    Humanoido, I targeted the newer stuff, OpenCL. You might want to take a closer look at the source you posted and do some research yourself :) Secondly, how is OpenGL, or OpenCL in this case, related in any way to connecting a Propeller to a video card? How does the data go from the Prop to the video card and back?

    That was a typo. Go for OpenCL.

    The most recent collection of sources posted was intentionally dated to take advantage of the pre-existing card owned by Dr. Mario. Granted, this is replaced by numerous other cards, but it's entirely advantageous to use what we already have. With the large number of streaming processors, this is a fine board for the apps which we consider.

    I see Dr. Mario answered your other questions.
  • HumanoidoHumanoido Posts: 5,770
    edited 2011-06-11 08:20
    Dr. Mario wrote: »
    And, there's some problem: If we should use PCIe x4, we may need maybe 120 Props to satisfy the throughput requirements. PCIe x1 should be no problem to try and snake the data throughout, while PCIe x16 is a huge problem (will need 320 Propeller I / 24 Propeller II to keep up with that kind of bandwidth - at least if we use Cyclone III FPGA to cut out our jobs, that would solve the bandwidth problems - and beware: Altera Quartus II tools are HUGE [I am glad I got 500GB hard drive here. -___-;] and I have it installed on my workstation already.)
    I would recommend using the highest bandwidth in communicating with the host, and for the Propellers, maintaining their highest throughput. In between, buffer. I was thinking about having two pools of GPUs and Propellers. It doesn't matter if each pool fills at a different rate, as long as the sum of the individual pools are full capacity.
  • Dr. MarioDr. Mario Posts: 331
    edited 2011-06-11 08:31
    Humanoido, last of your post is entirely a good point.

    First, I would just go with FPGA for torturous jobs, leaving Propellers happily talking to the GPU chip without upsetting them in a way (such as having to muscle up on fairly complex PCIe initialization commands and some whizbangs required to keep PCIe hosts happy) - all in that, Cyclone III would simply take care of very messy detail. Granted, I have been planning to use the FPGA in Dendou Oni.

    Second, what I have been noticing: When screen goes black, nothing else (such as when booting up the firmware [the BIOS] ), the bandwidth could be anywhere from 5 to 65 MB/s which is considered brain-dead simple to keep up with compared to VESA-compatible ISA cards which required to keep the constant flow of data to keep FIFO full. When 3D graphic or all of scientific-class supercomputing takes place, on the PCIe x16, it could hit 128GB/s. In other hands, AMD Radeon HD series GPU doesn't care as long as the GPU firmware (AtomBIOS) is happily informed that the processor's still on host bus.
  • Mike GMike G Posts: 2,702
    edited 2011-06-11 08:46
    Humanoido wrote:
    That was a typo. Go for OpenCL.
    Ah the ubiquitous typo
    Humanoido wrote:
    I see Dr. Mario answered your other questions.
    No, Dr Mario did not answer the question and neither did you. Classic Humanoido dodge. Just curious, did you even read the the so called datasheet?
    otherwise, how would we use the GPU to re-encode our home video? =)
    Apples and Oranges

    Dr. Mario wrote:
    And running OpenCL on Propeller II is doable
    Seriously?
    Dr. Mario wrote:
    OpenCL is supposed to be barebone (very small set of library - all by itself).
    It's supposed to be very small or it is very small? The OpenCl.dll is 50k.
  • HumanoidoHumanoido Posts: 5,770
    edited 2011-06-11 09:05
    Dr. Mario wrote: »
    Humanoido, last of your post is entirely a good point. First, I would just go with FPGA for torturous jobs, leaving Propellers happily talking to the GPU chip without upsetting them in a way (such as having to muscle up on fairly complex PCIe initialization commands and some whizbangs required to keep PCIe hosts happy) - all in that, Cyclone III would simply take care of very messy detail. Granted, I have been planning to use the FPGA in Dendou Oni.

    We're on the same page. I would also leave those details up to the Cyclone III if I was running PC. The principle is the same with Mac. Luckily, AMD has tailor suited the card to Pro Mac Bus and with several open slots, a powerful system could be put in place.
    Dr. Mario wrote: »
    Second, what I have been noticing: When screen goes black, nothing else (such as when booting up the firmware [the BIOS] ), the bandwidth could be anywhere from 5 to 65 MB/s which is considered brain-dead simple to keep up with compared to VESA-compatible ISA cards which required to keep the constant flow of data to keep FIFO full. When 3D graphic or all of scientific-class supercomputing takes place, on the PCIe x16, it could hit 128GB/s. In other hands, AMD Radeon HD series GPU doesn't care as long as the GPU firmware (AtomBIOS) is happily informed that the processor's still on host bus.
    This is what I like about the AMD. You get the full speed to the host from the GPUs as evidenced in the graphics. Computational power is regulated and kept within the host and will be at a very high level, several TeraFLOPS is possible with straight off-the-shelf configurations.
  • Dr. MarioDr. Mario Posts: 331
    edited 2011-06-11 09:32
    Yep. It's one of AMD Radeon's power-saving feature: It was programmed to live with what it got, without sucking up the electricity dry (which would stress the PSU as well).

    That's a very good reason to like AMD Radeon HD series (the reason Apples stuck with it until mid '00s) - they are also still used in embedded system (such as Freescale's MPC8640D demo board).

    I got the APP tool - will be playing with it a bit.

    Mike G, it really depends on if you are going to use the feature or not - the compiler will decide. And also Propeller II will boot from SPI, so the maximum size for SPI flash is about 64MB (hard to find, but 1 - 4MB flashes are more common) - I may include SDRAM on my yet-to-be-designed Propeller II board so that way it will have room to put the data on and hold it off until needed, so there's no reason why it's impossible.
  • HumanoidoHumanoido Posts: 5,770
    edited 2011-06-11 14:39
    Dr. Mario wrote: »
    That's a very good reason to like AMD Radeon HD series (the reason Apples stuck with it until mid '00s) -.
    Apple stayed with the AMD Radeon HD Series - excellent decision on their part - and even now the newest mid to top-of-the-line Apple Macs all use it. That's where I'm getting the first one where it's a pick & choose from mid to upper range AMD. It's signed, sealed, delivered and plug n' play. It doesn't get much better than that.
  • HumanoidoHumanoido Posts: 5,770
    edited 2011-06-12 19:25
    Combo Program Era

    This marks the beginning of the Big Brain's new Combo Program Era, or CPE, when various code will be designed or combined to enable greater functions. This is now ongoing. Still in prerelease idea phase, the combinations to be included are of three fundamental forms.

    1) At Once Distribution

    In the past, distribution had taken place with slow methods. This is no longer the case.

    2) Lightning Enumeration

    In the past, enumeration used hardware and software to achieve enumeration, which was ok for small numbers of processors, was exceedingly slow for large numbers of processors. This has all changed. LE or Lightning Enumeration should handle over 100 Propellers in less than one second.

    3) Collaborative Cloning (CC)

    CC is a new topic, at least in the known hobby field, and has not been previously discussed. It will be addressed in the future as global development progresses.

    CPE addresses software. The philosophy of Big Brain's software modules is to begin small - it's likely no shared subroutines will be easily found. This could change in the future where CPE with more elements as a whole can be reduced in code. However, the objective thus far is fundamentally small and single purposed exampling code.
  • HumanoidoHumanoido Posts: 5,770
    edited 2011-06-12 19:37
    Combo Wiring Era

    It was anticipated the Combo Program Era, CPE, would synonymously result in a Combo Wiring Era or CWE. This is now what has happened. Recently wiring was added to the basic brain design for types of communications. Add-on blue wires run port arrays for high-speed-results of specific purposes such as iso-prop-to-prop and the idea is to include intermixing forms of HYBRID communications (abbreviated HY).

    Often more than not, these functions occur at different times across their wiring, and thus the wiring can be shared and mixed with other uses, utilities, and functions. This is no different than multi-purposing Propeller pin 32 after loading.

    HY is still being looked over and at the current new level of complexity, the napkin sketch is no longer effective or useful. It's hoped the new CAD design software purchase and new computer purchase will happen soon, so we can get back to design release and updating.

    Here's an example of this type of wiring design sharing, with schematics, showing the AM's five designs (also see updated schematics).
    http://forums.parallax.com/showthread.php?124433-Tiny-Tester-for-Developing-Parallel-Algorithms

    What does the Combo Wiring include? Interestingly, it has the same hardware as the CPE with the addition of HY. Originally it was anticipated that HY could achieve the fastest possible speeds. It is now believed that HY could be obsoleted with At Once technology. More on At Once Technology will definitely be scoped out and developed in the near future.

    1) At Once Distribution
    2) Lightning Enumeration
    3) Collaborative Cloning (CC)
    4) HYBRID Communications
  • HumanoidoHumanoido Posts: 5,770
    edited 2011-06-12 20:27
    Dr. Mario wrote: »
    Humanoido, last of your post is entirely a good point. First, I would just go with FPGA for torturous jobs, leaving Propellers happily talking to the GPU chip without upsetting them in a way (such as having to muscle up on fairly complex PCIe initialization commands and some whizbangs required to keep PCIe hosts happy) - all in that, Cyclone III would simply take care of very messy detail. Granted, I have been planning to use the FPGA in Dendou Oni.

    I thought this was your computer's motherboard but I now see it's the FPGA card. One question about it, it quotes the number of LEs or logic elements ranging from around 5,000 to 200,000. It that comparable to a Propeller's Cog in the sense of a small streaming processor that can be programmed? If so how would you characterize it with the AMD card which quotes streaming processors in far fewer numbers?

    http://www.altera.com/products/devices/cyclone3/overview/cy3-overview.html#table1
  • HumanoidoHumanoido Posts: 5,770
    edited 2011-06-12 20:49
    At Once Technology in Large Multi-Prop Machines

    Announcement: Work has begun on a new concept named At Once Technology for a high speed Synapse communication system in the Big Brain.

    At Once Technology, or AO, allows transferals to take place instantly or at once. This is but one advantage of working with large multi-prop systems.

    AO has many uses beyond propagations of synapse-like response in machine cores, such as primary matter distribution, infusion, communications and function inducement and tagging such as enumeration and parameter passing.
  • HumanoidoHumanoido Posts: 5,770
    edited 2011-06-12 21:23
    The Dawning of the Parallel Era

    Some old timers are quick to point out its all been done before and you're wasting your time. They're correct about many of the achievements in the past. Why reinvent the wheel?

    Probably because no one used these types of wheels. Now it's time to light the torch, now that technology and resources have caught up.

    But exactly what happened to the promises of Parallel architecture? Too complicated? Too expensive? Unavailable? Perhaps it was in the past.

    And thinking about what's happening to one program distributed and evolving in over 100, 1000 or more paralleled locations can be challenging.

    Methods of programming are vastly different from single core machines and often require parallel algorithms and new ways of thinking.

    Today, herein embeds the challenge, the excitement and the impending propensity of tremendous gains. Not unexpected, someone will cite Moore's law or how more gains result in less, but we can cite back Murphy's law that this person better stay home in bed.

    The Big Brain has made some strides in the overall scheme of things, but taking focus, the direction of parallel development is a quantum leap beyond thinking in terms of old single processing.

    It does appear that the methods of the past, such as the type of programming, the method of loading and running code, the way of communicating, the input and output style, and the methods to problem solving are all dated.

    But we agree this dated fundamental core technology has advanced to the new parallel technology emerging today.

    This changes the face of large computing and thinking machines and ...
  • HumanoidoHumanoido Posts: 5,770
    edited 2011-06-12 21:42
    The Big Brain as a Parallel Machine
    Is the Big Brain a monster parallel machine?

    Yes.

    In the first designs, which may be retained, repurposed or obsoleted, we dealt with programming the execution of communications in a serial talk-no talk fashion. Some rules of engagement dealt with bosses and workers, who could speak and when. In the past months and weeks, this has all evolved and moved on.

    The Big Brain now has two sets of parallel wiring (in addition to other wiring). One set of parallel wiring connects every in-array (and in Partition) Propeller together to a common point. The second set of parallel wiring does the same but the common point does not share a connection with the first. However, both connections share a joining with all Propeller chips.

    This opens up new types of large parallel-parallel machines, programming and applications.
  • HumanoidoHumanoido Posts: 5,770
    edited 2011-06-12 22:06
    Introduction of Parallel-Parallel ParP
    Is it possible? How is it achieved? What are the resultant apps?

    As previously described, the Big Brain has introduced a concept of parallel-parallel. Parallel-Parallel is not simply running two or more eight or 16-bit parallel ports at the same time. It entails a completely "paralleled" connection of the same electrical singularity or point in common.

    There are two such connections in the Big Brain with a Parallel-Parallel, or ParP. It is possible to program a ParP by dedicating a cog to each so that each runs in tandem with the other.

    Multiple ParPs can be introduced. In theory, the Propeller I could support wiring for 32 ParPs but only support software for 8. Here, the primary ParP advantage to a prop is more parallel cogs and not more memory.

    What are the many uses for ParPs? The first that comes to mind is that individual props can have simultaneous access to vast libraries at the same time. With a clocking time pump, these libraries can be periodically sent to all props at the same time and be received by all props at the same time.

    ParPs could involve massive transfers of parameters. These could be very important where param snapshots are necessary. In the case of a robot, a shutdown or brain stasis could more readily be achieved on a global massive scale.

    ParPs could involve a pre-loading, i.e. status setup or an inducement of "preloaded" brain constructs, personalities, or preconfig neural setups.

    Parps could serve as the distribution medium if a single prop is designation as the "generator."

    Perhaps the prime app of ParPs is for speed in numbers. More info will follow as devs continue.
  • Phil Pilgrim (PhiPi)Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-06-12 22:26
    Okay, so how does "At Once Technology" work? You must have a schematic and source code you can share with us now, right? And what about "Parallel-Parallel?" Details, please!

    -Phil
  • HumanoidoHumanoido Posts: 5,770
    edited 2011-06-12 22:33
    Building a Larger Multiple Point ParP Machine

    In the previous post, a two point ParP machine was considered using the Big Brain' Propeller array. However, in true parallelism, up to a maximum eight point ParP is possible (per contiguous propeller chip array).

    If half the props are dedicated to an upper ParP and the other half dedicated to a lower ParP, sixteen points could be achieved. All 16 points function in tandem. This, if driven with a cog per data stream from the prop generator chip, would result in sixteen simultaneous, in true parallel, streams of data.

    Dividing the Propeller chip array an additional time could use all 32 pins and create a 32 point ParP machine for 32 streams of true parallel data.

    It would be easier to draw this in 3D as each pin has a foci. Construct an array of 32 foci and show the adjoining pins. Example, pin 31 is foci 31 and all pins 31 join in common to one foci.

    [insert schematic here]
    This representation shows a multiple ParP machine using arrays of Propeller chips and the resulting 32 dedicated processing streams
  • Phil Pilgrim (PhiPi)Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-06-12 23:02
    Humanoido,

    I have no idea what you're talking about. Making up terms as you go along does not facilitate communication but, rather, obfuscates matters and comes across as gibberish. Please try to understand that, to connect with your audience, you have to speak the audience's language. The language of this forum is schematics and source code.

    Thanks,
    -Phil
  • LeonLeon Posts: 7,620
    edited 2011-06-13 00:56
    I haven't a clue what all that gobbledegook means, either. Show us something very simple like an array of four Propellers performing a meaningful task using parallel processing, with a schematic and working code.
  • Duane DegnDuane Degn Posts: 10,588
    edited 2011-06-13 09:09
    Hunanoido, I've been content to let Phil point out weaknesses in your presentation. I thought everyone on the forum held Phil is such high regard that his objections would be enough for anyone to mend their ways.

    I will join Phil's plead (my word not his) for more substance of what is being posted. I've offered many times (and it still stands) to help with code in Prop to Prop communication etc. Sorry but I'm not interested in merging video cards with Propellers so I wont be much use there.

    One area I might disagree with Phil a little on is the need to share source code. While I think sharing source code is the easiest way of showing one's progress there are other ways albeit they would take more time. example: Showing what was inputed into a system and what the output was. I personally, am willing to post any code I write for the Propeller even if it is a work in progress. I always feel my time spent coding was more productive if others are also using the resulting code.

    I recently looked back at some posts I had read when I was first learning to program microcontrollers. I remember being easily impressed at projects that I now see were rather trivial. I think Phil, justly, feels it is important to let those new to microcontrollers know this project of yours doesn't not as yet solve any useful problem. (As many projects on this forum (including mine) also don't.)

    I've read many posts in this tread without understanding what was actually done.

    I hope Phil continues to call you out for posts lacking sufficient detail to understand what you're doing.

    I personally would like to try to get some code that would use . . . let's say five (since that was a recent simplification made for testing here) Propellers that would do something useful/entertaining/interesting and that could do it better/faster than one Propeller could do it.

    Of course this is your project and you can pursue any course you wish. I just hope you'll let the rest of us follow along with post that have more substance.

    Duane
  • Dr. MarioDr. Mario Posts: 331
    edited 2011-06-13 10:26
    Humanoido - your plan's nice. It should work out, although I am kinda hoping you may try out the IO "pinging" delay corrector, training the devices to take in IO trace size and some thermal drifting, so that way Propeller will keep their IO speed as high as possible.

    For my board, it will keep getting complicated - I am going with 50-bit packets as mentioned in my old report paper - I would use Spread Spectrum signaling (taking advantage of Propeller II's fancy IO hardware) to keep up at about 200MB/s to few gigabytes per seconds each chips (70GB/s for whole system) - with their IO being configured as Hypercube and Common Bus, thus similar to the Transputer T414s' wiring.

    At least you get it kinda easier with P8X32A, and I kind of not, largely because of IO speed and possible data delays (noises won't be a concern because of advanced frequency hopping depending on the software requests and CPU IO speeds) At least it won't be a headache because I will deploy the SDRAMs to use as Level 3 cache RAMs, either with the processors themselves or if I elect, I will put in GDDR-3 / 5 to get away with Propeller II array's screaming high speed IO, from and to the IO switch FPGA, while being able to share the common RAM resources.

    But, if you want to do the same, at least you may be able to learn much from me, even some tricks. Yet, I am even more confident that you will pull it off, considering that you have been putting in lot of money, efforts and labors of love into your research project.
  • Dr. MarioDr. Mario Posts: 331
    edited 2011-06-13 11:06
    To answer your questions:

    Humanoido - the video card's in my computer and I am using it. In Dendou Oni hardware, I would buy another card and stick on the FPGA IO switch board, along with Freescale MPC8379 processor (via its PCIe bus). Streaming Processor? The MPC8379 handles the messy details, via the FOSS driver and uCLinux MMU add-in.

    At Once... The way I see it: The Propeller chips just simply share the same copies of the machine codes and agree that their code tables are coherent and then execute them at once (it can be done just by a chip itself, all Cogs executing them at once) - and that the codes just flying around as rapidly as possible. After the codes are done, the Propellers send the resultant codes back to a main processor - in closely the same way that streaming processors on the GeForce / Radeon GPU die do its jobs.
  • Mike GMike G Posts: 2,702
    edited 2011-06-13 12:37
    Without source code and schematics, we're left with only words and no way to confirm or refute Humanoido’s claims.

    I’ve spent many hours reading these threads. I could not explain one problem that the Big Brain solves. While this thread is entertaining, I’m wondering if my time is better spent elsewhere.
  • LeonLeon Posts: 7,620
    edited 2011-06-13 12:49
    I think that most people, like us, read Humanoido's posts for their entertainment value. None of the other forums I frequent has ever had anything like his Big Brain under discussion.

    Such posts might be counter-productive for Parallax. A prospective professional user of the Propeller might come across them, and worry about his reputation if he uses or recommends the device.
  • Dr. MarioDr. Mario Posts: 331
    edited 2011-06-13 13:05
    In my own word, a can of worm.

    Yet, sometimes we don't have enough time, nor resource - a reason why I am holding my project off until I got sufficient resources...

    Not to discourage anyone, just that I am now being cautious and just use word what I just have to say.

    I will just spill the beans once I get my project set up and works great.
  • Mike GMike G Posts: 2,702
    edited 2011-06-13 13:21
    I worry more about the noobs then professional folks but who am I to judge this open forum. I figure if Parallax has concerns, they'll do something. I suppose I'll continue to see what's up with the Big Brain from time to time.

    Dr. Mario, you're highjacking the thread. This is not about you're Supercomputer project.
  • PublisonPublison Posts: 12,366
    edited 2011-06-13 13:32
    I really miss Dr. Jim and Mark. :)
    http://machineinteltech.com/blog//blog1.php
  • HumanoidoHumanoido Posts: 5,770
    edited 2011-06-13 19:42
    Proof of Living Machines

    Whether a Big Brain project outing involves a trip to the electronic
    parts store or to the appliance store, signs are everywhere indicating
    the level of AI in China is very advanced. I took this photo
    when shopping electronics.

    Don't overlook the sign on the left showing robot AstroBoy. And
    you may want to check out the living AI machine on the table
    towards the bottom of the sign at left.

    attachment.php?attachmentid=82106&d=1308018211
    Proof that AI is alive in China

    As you walk through these doors, its a life changing experience.
    The machines are fantastic, they live, they do stuff and they are
    unlike anything you've seen in a store before. Unfortunately
    the temperature inside was over a hundred degrees (living
    machines don't complain) and I was too hot to continue
    photography.
    349 x 465 - 42K
  • HumanoidoHumanoido Posts: 5,770
    edited 2011-06-13 20:53
    Dr. Mario, you're a goldmine of great ideas - Always feel free to post in the Big Brain thread. Your postings are all on topic and useful. We can FPGA work together comparing notes on development of streaming processors.

    I'm looking forward to trying out the IO Pinging Delay Corrector equivalent and will look into the setup after I get my board and computer. BTW, you understand the concept of "At Once" very well. I believe you are one of the leading developers/thinkers in this field.

    Regarding the Freescale MPC8379 processor (via its PCIe bus), I like the arrangement where the MPC8379 handles details, via the FOSS driver and uCLinux MMU add-in. I'll need to take closer inspection on the AMD. The handlers are already installed on the MB so one less concern.

    The Brain will probably use the 73.6 GB/s memory bandwidth, going with up to 1.008 TFLOPs compute power using the TeraScale 2 Unified Processing Architecture. That's one board only, not counting the 2nd board with Quads or props.

    The 720 Stream Processors would be a good infusion to the props, firing with AMD App Acceleration, Eyespeed, OpenCL and DirectCompute depending on the apps. And yes, there's enough GGDR for screaming high xfers.

    The idea is to have two Brain Hemispheres, which can be separated, one with the propeller arrays and partitions and one with the 720FPGAs, each can act as a completely functioning machine. But that's another topic.
  • HumanoidoHumanoido Posts: 5,770
    edited 2011-06-13 21:10
    Multiple Machine Brain Hemispheres

    The human brain relies on a duo hemisphere arrangement, of right and left. Different processing capabilities exist for different sides of the brain. Understanding the human brain can lead to greater understanding for a machine brain. When is the hemisphere brain concept applicable to a machine brain? When arrays of processors are doing different processing or are of different types for different purposes, a hemisphere can become useful. For example, in the combination of props and GPUs, in the combinations of arrays and partitions, or in the case of definitive software that can be confined to L/R definitions.

    brainorg.jpg
    http://www.lefthandersday.com/tour2.html

    The left hemisphere (RIGHT HAND CONTROL) controls Speech, Language, Writing, Logic, Mathematics, Science, this is the LINEAR THINKING MODE.

    The right hemisphere (LEFT HAND CONTROL) controls Music, Art, Creativity, Perception, Emotions, Genius, this is the HOLISTIC THINKING MODE

    http://www.lefthandersday.com/tour2.html

    Left Brain Functions
    Time
    Logic
    Math
    Science
    Analysis
    Language
    Organization
    Writing

    Recognize
    Numbers
    Words
    Letters

    Right Brain Functions
    Genius
    Creativity
    Perception
    Spatial
    Music & Art
    Intuition
    Patterns
    Context

    Recognize
    Faces
    Places
    Ojects

    In the machine brain, a concept of L/R hemispheres does not need confinement to hardware only or software only. A combination of one or the other or both is possible. Is is also possible to create additional machine brain sections beyond the hemisphere such as a quadsphere or octosphere.

    One such proposal considered is that of three dimensional spheres and representative diffusion rings of convention that fit the machine brain model. However, this is best represented in 3D and not on flat paper. These drawings are maps best created with 3D graphical programs on a high end computer. More on this topic will follow.
  • ElectricAyeElectricAye Posts: 4,561
    edited 2011-06-13 21:11
    ... And what about "Parallel-Parallel?"...

    I just presumed it had to do with parallel processing utilizing parallel universes. Just think of the speed!
This discussion has been closed.