
Concurrent vs Parallel


Comments

  • This should be moved to general discussion. Oops!
  • kwinn Posts: 8,697
    IOW, it depends on how you define concurrent and parallel. I think “parallel” is a poor choice as a description. In what way are they “parallel”? At best the typical multi core systems we have can coordinate multiple cores to work on a problem concurrently.

    The old SIMD and MIMD descriptions were more accurate, and MIMD is a much better description for the multi core systems of today. Each core executes its own instruction stream independently of whatever the other cores are working on.
  • Heater. Posts: 21,230
    I think this is all semantics.

    Traditionally in programming we had: sequence, selection and iteration.

    Basically: "do this then do that", "if someCondition do this else doSomethingElse", "while condition doSomething".

    No matter how many cores your CPU had, or how parallel the machine was under the hood to speed up your code, the language defined this simple, orderly style of programming, with predictable results.

    But what if we introduce the idea of one or more things happening at the same time into the very language?

    Then we have: sequence, parallel, selection and iteration.

    So, for example:
    SEQ                 -- statements execute one after the other
        a = 1
        b = 1
        c = 0

    PAR                 -- statements execute at the same time, in no defined order
        c = a + b
        d = c + c
    
    What is the value of "d" when all this is done?

    Indeterminate. We have no idea what order those statements in the PAR block get executed in.

    It does not matter if all this runs on one processor or many.

    "concurrent = parallel" as far as I can tell.

  • Rayman Posts: 13,805
    Concurrent just means things are happening at the same time.
    Parallel also implies equal processes, like Propeller cogs.
  • The difference I can see, by reading and thinking on this, is intent.

    Concurrency simply means happening at the same time. And on a uniprocessor, it actually happens in discrete task form. Some scheduler keeps all the concurrent tasks moving.

    A long while back, I used to have this discussion about Windows. It was multitasking, and multiuser, but not concurrent multiuser like Unix was. This meant the OS did support concurrent execution, but did not really offer support for concurrent users.

    A dual head Linux box, or one serving up apps to many users was concurrent multiuser, for comparison, in addition to being concurrent in its multitasking.

    Windows has since been upgraded and does support meaningful concurrent multiuser capability via things like RDP, etc...

    None of this involves parallel just yet. Still a uniprocessor scenario.

    As you have mentioned, a sufficiently fast CPU and tasker can appear to do what multiple CPUs working in parallel can do.

    At first this appears to blend these two ideas into one, leaving semantics.

    However, when intent is considered, there remains a distinction.

    Say we have multiple compute units and they are all general purpose and they all execute at the same time. That is a true multiprocessor, and it differs from a uniprocessor in that multiple tasks may be performed at the same time, independently.

    Also say we have multiple compute units that are more limited, but the same in every other way. GPU shaders are a good example of this. They too can perform tasks at the same time, independently, but not general purpose tasks.

    A multiprocessing PC has multiple general purpose processing units, and if it is equipped to do graphics, it has a number of specialized compute units as well.

    Intent matters here.

    For the specialized GPU shaders, the intent is parallel, in that one task, say rendering a display effect, gets well distributed among the compute units to get that task done more quickly than we find to be possible using a single compute unit.

    The general purpose processing units may also be used in this way, but do not have to be in order to see the benefit of multiprocessing. They may be computing for many different users and tasks, and the intents behind those may or may not share some dependency, similarity, goal, etc... This is concurrency, hardware assisted by a multiprocessor.

    In short, parallelism is a subset of concurrency: the intent is to get a task that can be distributed onto multiple compute units done faster than is possible with a single compute unit.

    The more general notion of concurrency simply means being able to do more than one task, or follow more than one intent at the same time, if you will. There is no requirement for the work to be related in any way, only that all of it can proceed rather than being done sequentially.

    A concurrent multiprocessor can be used in the parallel computing way, one task being aligned and distributed to get done faster, or in the concurrent way, where lots of tasks are getting done faster, but aren't necessarily aligned and distributed in any meaningful way.

    How to better and more generally apply the idea of parallel processing is being studied, as it can help more tasks complete more quickly when multiprocessing is available, which would improve performance over plain concurrency, in theory anyway. That is the whole "how do we get more out of multiprocessing?" type of discussion that has gone on for a long while now.

    We understand concurrency pretty well. Lots of processors can be doing lots of tasks fast, but it is considerably harder to get lots of processors to complete a single task faster, which is parallel processing.

    The P2 is a great machine in both respects. As a concurrent multiprocessor, it can execute a lot of different tasks at the same time, contains hardware support for shared memory, etc.... and it can be used to perform a single task faster as a parallel processor.

    I like to think of a COG as being a lot like a GPU shader when it is running COG code; when the same COG code is running on a lot of COGs to get a task done quicker, the P2 is being used as a parallel processor.

    A COG, or a few COGs, running HUB code and/or just performing a task, say acting as a peripheral, is an example of the P2 being used as a concurrent multiprocessor.

    One last distinction...

    With parallel, more than one compute unit is implied for the task at hand to be accomplished by parallel processing.

    Concurrency has no such implication.
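
    To make the intent distinction concrete, here is a minimal C/pthreads sketch under those definitions. The names sum_chunk, blink_led and poll_uart are mine, purely illustrative: the parallel part splits one job across four threads to finish it sooner, while the concurrent part just lets unrelated jobs proceed at the same time.

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define LEN 1000000

    static long data[LEN];
    static long partial[NTHREADS];

    /* Parallel intent: one task (summing data[]) aligned and
       distributed across compute units purely to finish sooner. */
    static void *sum_chunk(void *arg)
    {
        long id = (long)arg;
        long sum = 0;
        for (long i = id * (LEN / NTHREADS); i < (id + 1) * (LEN / NTHREADS); i++)
            sum += data[i];
        partial[id] = sum;
        return NULL;
    }

    /* Concurrent intent: unrelated tasks that merely proceed at the
       same time. Empty stubs, just to show the shape. */
    static void *blink_led(void *arg) { (void)arg; return NULL; }
    static void *poll_uart(void *arg) { (void)arg; return NULL; }

    int main(void)
    {
        pthread_t worker[NTHREADS], led, uart;
        long total = 0;

        for (long i = 0; i < LEN; i++)
            data[i] = 1;

        /* The parallel way: split one job, join, combine the results. */
        for (long id = 0; id < NTHREADS; id++)
            pthread_create(&worker[id], NULL, sum_chunk, (void *)id);
        for (long id = 0; id < NTHREADS; id++) {
            pthread_join(worker[id], NULL);
            total += partial[id];
        }
        printf("parallel sum: %ld\n", total);   /* 1000000 */

        /* The concurrent way: independent jobs side by side. */
        pthread_create(&led, NULL, blink_led, NULL);
        pthread_create(&uart, NULL, poll_uart, NULL);
        pthread_join(led, NULL);
        pthread_join(uart, NULL);
        return 0;
    }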

  • Rayman Posts: 13,805
    This wiki page explains the difference (assuming it's correct):
    https://en.wikipedia.org/wiki/Parallel_computing
  • I'm largely happy with that.
  • Rayman Posts: 13,805
    edited 2016-01-22 00:20
    The concurrent page also explains the difference, maybe better:
    https://en.wikipedia.org/wiki/Concurrent_computing


    A single core processor can do things concurrently, but not in parallel.
  • potatohead Posts: 10,253
    edited 2016-01-22 00:26
    We can do parallel processing on the P2, but a lot of the compelling use cases really are hardware assisted concurrency in nature.

    I like the term concurrent multiprocessor because it's more inclusive of all the Chip can do.
  • Ariba Posts: 2,682
    Heater. wrote: »
    ...
    So, for example:
    SEQ
        a = 1
        b = 1
        c = 0
    
    PAR
        c = a + b
        d = c + c
    
    What is the value of "d" when all this is done?

    Indeterminate. We have no idea what order those statements in the PAR block get executed in.
    ...
    With FPGA "programming" in Verilog or another HDL this must be defined. On an FPGA everything happens in parallel (or concurrently?).

    So if c and d are clocked registers, clocked from the same clock:
    d = c + c will use the value of c before a + b gets assigned.

    If c and d are just labels for wires in combinatorial logic, then:
    d = c + c is the same as d = a + b + a + b
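
    A small C sketch of those two readings (plain C standing in for the HDL, just to show the arithmetic):

    #include <stdio.h>

    int main(void)
    {
        int a = 1, b = 1;

        /* Clocked registers: on the clock edge every register loads a
           value computed from the OLD register contents. Simulate by
           computing all next values first, then committing together. */
        int c = 0, d = 0;
        int c_next = a + b;       /* 2                */
        int d_next = c + c;       /* uses old c, so 0 */
        c = c_next;
        d = d_next;
        printf("registered:    c=%d d=%d\n", c, d);   /* c=2 d=0 */

        /* Combinatorial wires: c and d are just names for expressions,
           so d = c + c collapses to d = a + b + a + b. */
        int c_wire = a + b;
        int d_wire = c_wire + c_wire;                 /* 4 */
        printf("combinatorial: c=%d d=%d\n", c_wire, d_wire);
        return 0;
    }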

    Andy
  • Heater. Posts: 21,230
    edited 2016-01-22 15:43
    That's right.

    We can have "parallel" at all levels of the computing stack. From distributed systems on the net, to super-computer clusters, to multi-core processors, to single processors with parallel instruction execution, to the very HDL used to design the chips.

    We can have "concurrent" on single core processors. Think concurrent users or multiple threads/processes being juggled by an OS scheduler.

    Spud is right, it's all a matter of intent. Or shall we say context.

    Which is why I have never been satisfied by the simple definitions of "parallel" or "concurrent".

    They can have a great deal of overlap to add to the confusion.

  • I think context is likely the better, more inclusive meaning here, FWIW.

  • My own confusion was an artifact of current, trendy terminology, which mashes everything into "parallel" and "multicore", both of which get used in ambiguous ways today, all the time.

    When this came up, I got fixated on it. I really do not do well with ambiguity like that. Now, to be fair, I am quite liberal in the structure of meanings and how they will or can form ideas...

    An example is that whole, "it's a big ask" chat we had a while back. I seem to parse that stuff just fine. No worries.

    But, in nearly every case where we do have more than one word for something, there is a distinction of some sort. They aren't identical. Almost never are. That is what I fixated on. Two different words had better darn well mean at least something different, or why have them?

    When I started on the higher level academic papers, intent, or context as Heater is saying, fell out as that distinction.

    (There is another detail discussion on how appropriate the word intent vs the word context is, but that's another day, maybe. I like context as it is more inclusive, and can communicate the distinction found here without the dependency of also needing to sort out what intent is too. So I'm happy at present.)
  • kwinn Posts: 8,697
    Heater. wrote: »
    That's right.

    We can have "parallel" at all levels of the computing stack. From distributed systems on the net, to super-computer clusters, to multi-core processors, to single processors with parallel instruction execution, to the very HDL used to design the chips.

    We can have "concurrent" on single core processors. Think concurrent users or multiple threads/processes being juggled by an OS scheduler.

    Spud is right, it's all a matter of intent. Or shall we say context.

    Which is why I have never been satisfied by the simple definitions of "parallel" or "concurrent".

    They can have a great deal of overlap to add to the confusion.

    Agreed. Parallel and concurrent are not sufficient to clearly describe all the current methods of computing. I'm not even sure there is a comprehensive list or description of them. Off the top of my head I can come up with:

    Pipelined – each stage of the pipeline performs a function on its data/instruction simultaneously.

    SIMD – a single operation is performed on multiple data items. Array processing?

    MIMD – multiple instructions performed on multiple data items.

    MISD – multiple instructions performed on a single data item.

    Distributed – software to split tasks between multiple computing systems.

    I'm sure there are more, but cannot think of them atm.

    What would we call splitting the video line generation between multiple cogs?
  • Heater. Posts: 21,230
    There we go. It's hopeless.

    As another example we have this weird modern term of the "cloud".

    Which often means services centralized on Google or Amazon or MS Azure etc.

    Yeah, yeah, they may have globally distributed systems, but they are playing for very centralized lock-in, i.e. lock-in to them.

    Ah well.
  • In my opinion, "parallel", as one of the synonyms for "concurrent", is not exactly correct. Concurrent means to exist or happen at the same time... but it does not indicate that concurrent events won't ever converge with one another. Whereas parallel is defined as... a series or set (of lines, planes, surfaces, objects, etc.) side by side and having the same distance continuously between them. At the same time it indicates that they are always at a set distance and will not converge.

    Perhaps the term "Parallel processing" is incorrect by definition in the sense that an operation would never interact with a neighbor and always be at the same fixed distance.

    "Concurrent processing" might be technically more correct in the terms in which it is used in that you expect a certain amount of combined efforts or convergence between processes.
  • potatoheadpotatohead Posts: 10,253
    edited 2016-01-23 19:00
    That is an interesting observation. Of course, one does not necessarily need convergence to accomplish a task with parallel processing. Rendering frames, performing discrete element analysis of various kinds, particle simulations, and other kinds of work never really converge so much as they just distribute work. There may or may not be local interactions.

    So parallel can sort of work, but it's not precise at all.


    one compute unit:
    start ......................................................................................> task complete

    several compute units, work distributed:
    start ...> sub task complete
    .................> sub task complete
    ....> sub task complete
    task complete


    I agree with you, which is why I've always preferred "concurrent multiprocessor" as the term for what a Prop and soon the P2 is. Andre' used it early on, and it made sense to me on first reading.

  • You have to think about BP (Before Parallel Processing) ...

    On a white board in some design room, trying to explain simultaneous multiple processor interaction, it may look as though they are all in Parallel, a consequence of how it was illustrated, but on every master clock cycle you have the opportunity to converge or share information between each processor. THIS is where it becomes a Concurrent process relationship instead of a Parallel process.

    Parallel is to Concurrent as Apple is to Fruit
  • kwinn Posts: 8,697
    You have to think about BP (Before Parallel Processing) ...

    On a white board in some design room, trying to explain simultaneous multiple processor interaction, it may look as though they are all in Parallel, a consequence of how it was illustrated, but on every master clock cycle you have the opportunity to converge or share information between each processor. THIS is where it becomes a Concurrent process relationship instead of a Parallel process.

    Parallel is to Concurrent as Apple is to Fruit

    Really nice analogy. The only machine I can recall that could truly be called parallel had multiple ALUs and register sets, all of which were executing the same instruction at the same time. IIRC that was one of the Cray systems.
  • MJB Posts: 1,235
    kwinn wrote: »
    The only machine I can recall that could truly be called parallel had multiple ALUs and register sets, all of which were executing the same instruction at the same time. IIRC that was one of the Cray systems.
    The Cray II (IIRC) I worked on in ~1983 had a 64-element vector processor
    which was able to really do 64 MAC (multiply-accumulate) operations like C = C + A * B (all 64-item vectors) in parallel. SIMD you would call it.
    This really could speed up our fluid dynamic simulations.
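
    In C terms that vector MAC is just the loop below; the Cray's vector unit performed all 64 element operations in lockstep, which is what makes it SIMD. A sketch, not actual Cray code:

    #define VLEN 64

    /* C = C + A * B, elementwise over 64-item vectors: one operation,
       many data items. A vector unit (or a modern SIMD/auto-vectorizing
       compiler) performs these iterations in lockstep rather than
       one after the other. */
    void vmac(double c[VLEN], const double a[VLEN], const double b[VLEN])
    {
        for (int i = 0; i < VLEN; i++)
            c[i] += a[i] * b[i];
    }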

  • Heater. Posts: 21,230
    Beau,
    Whereas parallel is defined as... a series or set (of lines, planes, surfaces, objects, etc.) side by side and having the same distance continuously between them. At the same time it indicates that they are always at a set distance and will not converge.
    Excellent. If we take that mathematical, geometrical, definition of "parallel" then parallel systems can never converge on a final result. They are always separated.

    Which turns out to be true for distributed databases and such.

    Cannot fight physics here.

  • Rayman wrote: »
    The concurrent page also explains the difference, maybe better:
    https://en.wikipedia.org/wiki/Concurrent_computing


    A single core processor can do things concurrently, but not in parallel.

    I think Rayman nailed it.

    Enjoy!

    Mike

  • Heater. Posts: 21,230
    It's exactly those definitions of concurrent and parallel that give me a headache for being so vague.

  • It makes sense to me. A display, for example, requires no convergence. Each pixel could have its own little processor, all running the same code. The result is an array of values. It's geometrically consistent with parallel.

  • "Parallel is to Concurrent as Apple is to Fruit" ... All apples are fruit, but not all fruit is an apple. So all concurrent processes could be considered parallel, but not all parallel processes are concurrent.

    Which brings up another question... In the English language, the term "OR" is always implied as an "Exclusive OR"... me? I just take both when offered an "OR" decision. <smirk>

  • potatohead Posts: 10,253
    edited 2016-01-23 22:12
    It's ambiguous. "Or" means OR, in the boolean sense. Some people will write "and/or" when they just need or, and they don't write "either" when they want XOR.
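
    In C the two are distinct operators, which makes the difference easy to check (a trivial sketch):

    #include <stdio.h>

    int main(void)
    {
        /* Inclusive OR (|) is true when either or both inputs are true;
           XOR (^) is true only when exactly one of them is. */
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                printf("a=%d b=%d  a|b=%d  a^b=%d\n", a, b, a | b, a ^ b);
        return 0;
    }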

    Attorneys get this right. Few other people do. I watch for it in contracts.

    And I still maintain concurrent is a superset containing parallel. :) I see people out there saying they are orthogonal. They are, for subsets of concurrent!
  • As @Rayman said, a single core can run concurrent processes but not parallel ones. So concurrent is not a superset and does not include parallel execution.

    I think this is quite obvious.

    Enjoy!

    Mike.
  • But it does. No part of parallel execution lies outside the domain of what concurrency describes.

  • msrobots Posts: 3,701
    edited 2016-01-24 02:01
    See the post above.

    A single core cannot run parallel processes, just concurrent ones. There is the difference.

    Mike