
The Future of Programming (ca. 1973)

Phil Pilgrim (PhiPi) Posts: 23,514
edited 2013-08-13 02:46 in General Discussion
Bret Victor, dressed as a 1970s IBM tech, gives a very interesting talk about the future of programming from an early-'70s perspective. He not only recreates the wonder of the new ideas spawned during that era, but he also uses it as a pedagogical tool to inspire the same creativity -- creativity which he "foresees" as possibly lacking "40 years hence" in 2013. Well worth watching all the way through!

BTW, his comments regarding parallel computing may resonate (or dissonate) with Propeller users.

Enjoy!

-Phil

Comments

  • doggiedoc Posts: 2,243
    edited 2013-08-09 14:57
    I was thinking this was an archive video until I noticed his earrings. Then I noticed someone in the crowd carrying a laptop.... Then the wireless transmitter for the mic.
  • ElectricAye Posts: 4,561
    edited 2013-08-09 16:10
    I jumped into this and just figured it was your typical brainstorming meeting at Microsoft circa 2009... At around minute 28 I realized this guy must be from somewhere else. Especially when he said:
    "The most dangerous thought that you can have as a creative person is to think that you know what you're doing."

    Good philosophy.
  • ratronic Posts: 1,451
    edited 2013-08-09 16:54
    I'll be the first to admit "I don't know what I'm doing." After watching that video, I'm now thinking that is a good thing!
  • Martin_H Posts: 4,051
    edited 2013-08-09 17:12
    I like the overhead projector; they used to be commonplace but disappeared along with typewriters.
  • Mike G Posts: 2,702
    edited 2013-08-09 18:14
    I enjoyed the video. I might need to reference it at a later time.
  • msrobots Posts: 3,709
    edited 2013-08-09 18:38
    I liked it very much and looked for more of his stuff.

    This one is even more interesting: 'Inventing on Principle' http://www.youtube.com/watch?v=PUv66718DII

    Enjoy!

    Mike
  • wass Posts: 151
    edited 2013-08-09 18:47
    doggiedoc wrote: »
    I was thinking this was an archive video until I noticed his earrings. Then I noticed someone in the crowd carrying a laptop.... Then the wireless transmitter for the mic.

    and .... drinking from a plastic water bottle and ..... starting sentences with "So". People didn't speak like that in 1973.
  • Heater. Posts: 21,230
    edited 2013-08-10 00:51
    All in all not an inspiring video.

    Unlike Engelbart in 1968: https://www.youtube.com/watch?v=yJDv-zdhzMY

    Or Sutherland in 1963: https://www.youtube.com/watch?v=6orsmFndx_o

    And this is IBM creativity circa 2013: http://delimiter.com.au/2013/08/07/banned-qld-govt-outlaws-new-ibm-contracts/
  • Gadgetman Posts: 2,436
    edited 2013-08-10 02:09
    That was a presentation given earlier this year, only made to look like it was from the 70s.

    Also, I believe that overhead projector is from the late '80s or early '90s.
    (I believe we had one that looked like that at our office until around 2005.)
  • LoopyByteloose Posts: 12,537
    edited 2013-08-10 04:13
    I have several odd feelings about this.
    a. I am not sure pretending to be 1973 offers any clarity into what was going on in 1973. We have since learned that a lot of what was 'new technology' had been developed much earlier and merely sat on by the government and big business.
    b. There tends to be a nostalgia created that affirms a myth. Is that intentional?
    c. I'm not sure much of this predicts the future.

    In the genre of Brave New World, Fahrenheit 451, and 1984, the era was very concerned with future dystopias, but none of these considered that the personal computer was about to become the vehicle of mass communication and mass consciousness.

    If this really were a 1973 presentation, and if it had predicted Wikileaks, Facebook, and so on, it would be more profound.

    Sure, 1984 had 'Big Brother' watching everyone with video cameras and flying cars, but the protagonist had to do his work at a typewriter.
  • localroger Posts: 3,451
    edited 2013-08-10 07:21
    I have several odd feelings about this.
    In the genre of Brave New World, Fahrenheit 451, and 1984, the era was very concerned with future dystopias, but none of these considered that the personal computer was about to become the vehicle of mass communication and mass consciousness.
    ...
    Sure, 1984 had 'Big Brother' watching everyone with video cameras and flying cars, but the protagonist had to do his work at a typewriter.

    Wrong eras, Loopy. Brave New World was written in 1931, before computers existed at all; 1984 in 1948, when few people knew they existed; and Fahrenheit 451 in 1953, when it was still reasonable for von Neumann to speculate that there would never be more than a handful of computers in the entire world. By 1973 computers were becoming available to modest-sized businesses, hand-held calculators had appeared, movies like The Forbin Project and The Billion Dollar Brain had speculated on the role of computers in society, and it was becoming apparent that general-purpose computers would soon be small and cheap enough to be owned by individuals. This represented an enormous expansion of possibilities, which would soon create both Apple and Microsoft, two companies that didn't yet exist and are among the most powerful in existence today.

    The point of the talk is that on the cusp of that expansion the experts who had struggled to bring the art to its present state had many varied ideas about how to leverage the vast increase in computing power which was visible on the horizon. Today, now that that expansion has happened, that open-mindedness has frozen into a status quo that no longer seems to realize other approaches are workable, much less that they might be even better.
  • potatohead Posts: 10,261
    edited 2013-08-10 10:47
    I liked this talk. Using the 70's style was a nice touch to get people to think about how we got here. "There won't be any markup languages or anything..." Seriously fun perspective! "They grow up with dogma", and it's really hard to get out of it. That time was early enough that nobody really knew the future and they just did stuff! Perspective indeed! "The most dangerous thought you can have as a creative person is to know what you are doing." "you become blind"

    Sometimes I draw an association between parametric CAD (its solver modes, which may be sequential or parallel, or variational in some systems; its features that contain constraints to express intent; how those connect through variables and each other) and reuse in programming.

    The sum of this is thinking parametrically. Take a specific intent, generalize it, prove it through reasonable limits and boundaries, and now the work is done for whole classes of mechanical parts.

    It is, of course, a very hard problem. In my professional capacity I regularly work with the very newest and most capable CAD systems. That industry was sparked by sequential, history-based parametric modeling. It exploded and then hit kind of a rut where the limits of that paradigm were found and optimized. After a considerable dead time, new ideas are starting to enter the scene again, and they are parallel, not sequential, not always history-based, not always parametric. I'm sorry if this is a little obtuse. It might take a short book to expand on it, and if there is interest I will, but maybe enough of you understand CAD to grok it... we shall see.

    Anyway, throughout that build-up, reuse was paramount. Two modes: one was to take components and use them elsewhere, and a side branch of that was to take classes of components and use them elsewhere. This got done, and done reasonably well, but only within a given system. Cross-CAD reuse is ugly stuff: intent is lost, and there are data representation issues too. Enter the new ideas, and suddenly that expands: some non-history-based parametric systems can begin to use data authored elsewhere, and even do it with some intent and parametric design possible.

    The other mode of reuse is taking pieces of things to build new components, Lego style. Part and component complexity has reached human limits, where a single person may be tied to a model for months. The newest systems, by dropping some of the old ideas, dredging up some abandoned ones, and mixing in some new ones, are enabling things like multiple users working on the same model, or more ad-hoc reuse scenarios where the intent to reuse wasn't built in early on (which would normally deny that reuse); the new paradigms let somebody package it up anyway, put some intent in there, and reuse it regardless of where it came from.

    Powerful stuff.

    The other day I had to author a proposal. Hate those. Wordy things: testy, fiddly, with all kinds of terms, logic, pricing, conditions, constraints, and the usual things we all know well. One of the biggest problems with these is determining the unique bits for a project, then combining or pulling forward the standard things so that labor can be reused going forward. I found myself wishing for some of the new ideas in simple word processing. Reuse in this way tends to have many of the same problems programming does. Connecting bits together just takes a long time. Some word processors can do "parametric" text, and of course there are things like TeX too. Kind of obtuse for ordinary people, but the same dynamic is there. Established ideas rule, some great ones lie in niches here and there, and there are some new ones bubbling up, quite similar to the multiple-users-on-one-part scenario. Multiple people in a document happens these days. Not well, but it happens.

    What I liked about this talk is the idea that we really don't have it figured out: reuse of things, defining intent, understanding the connections that exist, the ones that dangle on reuse and need to be connected, and most importantly, the ones that need to be understood when things are combined.

    One of the things I liked about the Propeller was the idea that little bits can be written with the purpose of combining them in lean, largely accessible ways. In CAD, the move has been to use parallel processing to improve the computer's ability to parse intent, think like we do, and act as a partner more than a mere tool, freeing people to drill into the task at hand without having so much head space taken up with details. In the word processor, parallel processing could enable semantic processing capable of similar things. Take this chunk off of a part, use it on another, and the union of the implied intents is obvious to anyone with some spatial and mechanical skill. Now the systems can do that, leaving people to position, move, and build rather than get into the mess of topology issues. Perhaps one day I can cut a paragraph, paste it into a document, and see the semantics resolved: two different company names, an old date, and other things get sorted, resolved, and presented in ways that let me make choices and keep authoring the real thing, not sorting out tedious bits here and there and potentially getting lost.

    So then, in programming the use of parallel has many connotations. One is compartmentalizing things for reuse; another is getting complicated tasks done quicker; etc...

    The thing I got out of this talk is that we can still do a lot to enable reuse and help people focus on building things, not spending so much time and energy on smaller details. And to be clear, there is always planning. Most of us here know how to plan for that kind of thing, build libraries, and reach a point where we really are able to reuse and build quickly, until somebody else contributes. The paradigms will be different, and of course the call for standards is always there to get people thinking in similar ways so that we can think together.

    All of that is a parallel problem.

    My thought always goes back to semantics, and to using the high computing power we have now as an agent for us, a partner, something able to get us out of the details and into the goal to a higher degree.

    Wouldn't it be interesting to copy some code, paste it, and get a dialog that highlights the connections and potentials? Things that dangle, are undefined, etc.? Data sourced from different places contains intent we know we can use, but it's packaged and expressed in ways that require people to think about it a lot, often with the result of reinventing things over and over and over.

    Back to CAD for one more moment. Before the new approaches started to take hold, remodeling happened all the time for lack of any real way to reuse things, connect them together, etc. One group would build a car frame, another would build the car, in different systems. A whole team would combine those, leaving each group to handle the intent they owned, and communication was poor. Getting "dumb" models meant being able to drop things into place, but not much else. Today it's possible to blend those things together in ways not possible just a short time ago.

    Programming could use some of that kind of thinking, and that's the impression I got from this guy.

    Of course the other approach is to design that stuff away, keeping it lean and mean so that the details are at least not thick. No one answer here of course, just me riffing on that talk some.
  • Publison Posts: 12,366
    edited 2013-08-10 11:13
    Did ya' notice every time he walked in front of the projector lens, it did not produce a shadow? The projector was not pointed to where the output was. I don't think a 3M projector could have put out that much light for that large a screen to be readable. It was hard enough to make out transparencies in a well-lit school room.
  • Publison Posts: 12,366
    edited 2013-08-10 11:16
    potatohead wrote: »
    I liked this talk. Using the 70's style was a nice touch to get people to think about how we got here. ...

    Potatoe,

    How many books have you written? :) Well, there are probably at least two on these forums. :)

    I'll have to print that off and put it in the "library" for reading.
  • localroger Posts: 3,451
    edited 2013-08-10 11:42
    Publison wrote: »
    Did ya' notice every time he walked in front of the projector lens, it did not produce a shadow? The projector was not pointed to where the output was. I don't think a 3M projector could have put out that much light for that large a screen to be readable. It was hard enough to make out transparencies in a well-lit school room.

    The projector was a prop; there was very obviously a camera in the mirror head.
  • Publison Posts: 12,366
    edited 2013-08-10 11:50
    localroger wrote: »
    The projector was a prop; there was very obviously a camera in the mirror head.

    My feeling exactly. Just surprised nobody picked up on that. I just remember my teachers walking in front of the projector and thinking, get out of the way so I can take some notes! :)
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2013-08-10 11:50
    Publison wrote:
    Potatoe, How many books have you written? Well, there are probably at least two on these forums. ...
    LOL! Perhaps we could coax him to include a tl;dr in his posts. :)

    -Phil
  • Publison Posts: 12,366
    edited 2013-08-10 12:11
    LOL! Perhaps we could coax him to include a tl;dr in his posts. :)

    -Phil

    Had to look that one up. :)
  • Heater. Posts: 21,230
    edited 2013-08-10 12:19
    potatohead,
    The other day I had to author a proposal. Hate those. Wordy things...
    That made me chuckle. I guess I don't have to explain why:)

    An interesting glimpse of what goes on in modern CAD systems. I haven't really seen any since the early 1980s, when the mechanical engineers at my workplace had huge dedicated workstation machines with two screens: one vector graphics screen, stroke-written like Ivan Sutherland's demo, and one raster screen for text input/output. That was the era when any "PCs" around were CP/M-based 8-bitters, so those workstations were mightily expensive and impressive.

    Anyway, what do you mean by "parallel processing" in those CAD systems of yours?

    I ask because parallel processing does not magically give you any capability that you don't have in a single-processor machine. Parallel processing only gets you more speed. Hopefully as close to linearly more speed as your processor count goes up.
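
    For the record, that hope is bounded by Amdahl's law: if a fraction p of the job parallelizes and the rest stays serial, n processors give you at most 1/((1-p) + p/n). A quick Python sketch (the 90% figure is just an invented example):

    def amdahl_speedup(p, n):
        # p: fraction of the work that parallelizes, n: processor count
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 90% of the work parallel, 64 processors yield ~8.8x, not 64x.
    for n in (1, 4, 16, 64):
        print(n, round(amdahl_speedup(0.9, n), 2))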

    Are those CAD systems you describe using multi-core processors only, or have they moved to using clusters or even supercomputers?

    There must be a way to do that "compositional" or "parametric" proposal writing. I mean, isn't that how web pages are put together today?

    At a simple level there are templating systems that take a standard document in HTML and fill in the variable parts that are marked out somehow:
    Proposal for the construction of {{widgetName}} by
    {{ourCompanyName}}, prepared for {{theirCompanyName}}.
    {{Date}}
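
    Filling those in takes only a few lines in any language. A rough Python sketch of the same idea (field values obviously invented):

    import re

    template = ("Proposal for the construction of {{widgetName}} by\n"
                "{{ourCompanyName}}, prepared for {{theirCompanyName}}.\n"
                "{{Date}}")

    fields = {
        "widgetName": "gadget",
        "ourCompanyName": "Acme Engineering",
        "theirCompanyName": "Widgets Inc.",
        "Date": "2013-08-10",
    }

    # Swap each {{name}} placeholder for its value from the dict.
    filled = re.sub(r"\{\{(\w+)\}\}", lambda m: fields[m.group(1)], template)
    print(filled)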
    

    Then web pages can also fill in some parts of the document from code written in PHP or whatever, pulling data from databases.

    Then why have the HTML at all? Have the server-side software generate the entire page(s) programmatically from some meta-document format.

    All these things are being done today in a hundred different ways. Look at this forum, for example: every page is pulled from parts in different locations and assembled just for me when I hit reload.

    As for multiple contributors to documents, that's a done deal already as well. Look at Wikipedia, for example. Or I would look at what software engineers use: source code repositories. What is a software project but a huge complex document, potentially with many contributors, using reusable parts, often with parametric building of the final output? Source code management systems like git (see github.com) allow for strict versioning, merging of contributions, and many of the features you are looking for.
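
    Even the diff and merge machinery is off the shelf. A Python sketch of spotting the changes between two versions of a sentence with difflib (the text is invented):

    import difflib

    v1 = "Proposal for the construction of a gadget by Acme".split()
    v2 = "Proposal for the design and construction of a gadget by Acme".split()

    # Report what changed between the two versions, word by word.
    matcher = difflib.SequenceMatcher(None, v1, v2)
    for op, a0, a1, b0, b1 in matcher.get_opcodes():
        if op != "equal":
            print(op, v1[a0:a1], "->", v2[b0:b1])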

    Of course all that stuff is not what you want to be doing in Word, but the output from your document construction system could be in Word or PDF or whatever.

    A lot of the ideas for your proposal-writing engine are in place already in hundreds of different implementations; perhaps you are just the man required to stitch some of these ideas together into the perfect proposal generator. The world would thank you; this is a common problem all over.

    Ivan Sutherland was asked how he managed to produce an object-oriented CAD system with interactive graphics, a solver, and other features at a time when most of those were unheard of, and all for a single thesis project. He said:
    No one told me it was hard.
  • Heater. Posts: 21,230
    edited 2013-08-10 13:50
    Loopy,
    ...but none of these considered that the personal computer was about to become the vehicle of mass communication and mass consciousness.
    As pointed out, your examples are from much earlier in history, but surely the computer as a mass vehicle of communication was in the imagination in the 1960s. Ever notice the video conferencing in "2001: A Space Odyssey", not to mention that there were computers everywhere? There are many other examples.

    Even in that video we see the DARPA document from 1963 presented which contains:
    ADVANCED RESEARCH PROJECTS AGENCY

    Washington 25, D.C. April 23, 1963

    MEMORANDUM FOR: Members and Affiliates of the Intergalactic Computer Network

    FROM: J. C. R. Licklider
    ..
    ..
    , the problem is essentially the one discussed by science fiction writers: “how do you get communications started among totally uncorrelated “sapient” beings?”
    ...
    ...

    If that document is not considering a computer as a vehicle of mass communication, I don't know what is.

    You are right about not predicting the social impact of all this networking: Wikileaks, Facebook, the erosion of privacy, etc. But that was the job for sci-fi writers, not the techno geeks who had their work cut out building it. Perhaps it was written about somewhere.
  • Heater. Posts: 21,230
    edited 2013-08-10 14:26
    ElectricAye,

    "The most dangerous thought that you can have as a creative person is to think that you know what you're doing."

    Good philosophy indeed. I like it because generally I have no idea what I'm doing:) Not that I'm implying it says anything about my creativity.

    Pretty much every project I have worked on demanded learning a ton of new stuff before even getting started. Or at least muddling along and trying to pick it up on the fly. Different programming languages, different operating systems, different application domains, programming paradigms (I hate that word) etc etc.

    That is sometimes very stressful when people expect you to get something done. But then I generally take the attitude that if I really can't do the job it's their fault for hiring the wrong guy for the project:) Somehow, miraculously, that has never happened.

    Also, if I knew everything about how to complete a project before I started, it would be incredibly boring. Where is the fun in regurgitating the same stuff over and over? If it were boring I'd have zero chance of completing it.
  • Mike G Posts: 2,702
    edited 2013-08-10 14:48
    Pretty much every project I have worked on demanded learning a ton of new stuff before even getting started. Or at least muddling along and trying to pick it up on the fly. Different programming languages, different operating systems, different application domains, programming paradigms (I hate that word) etc etc.
    Thank you sir, may I have another...

    Animal House (1978)
  • Heater. Posts: 21,230
    edited 2013-08-10 15:05
    Mike G.

    Eh?
  • Mike G Posts: 2,702
    edited 2013-08-10 15:45
    Heater, simply agreeing with you. Many projects, many teams, many languages, many timelines...

    The quote is from the movie Animal House and has nothing to do with programming.
  • potatohead Posts: 10,261
    edited 2013-08-10 16:06
    Re: What does parallel give you that a sequential system doesn't?

    Given current peak sequential CPU processing throughput limits, parallel gives two things:

    1. Reduced time to compute because an increasing number of computations can be done concurrently.

    2. Larger models. RAM was a big constraint; we've blown through it and are good for a while now. Those models can now be smarter too, so "large" has two axes: one is just data size, the other data complexity.

    Oh, and I guess a third is an increased ability to devote CPU to the "agent" or "partner" element of things, where computers infer relationships, intent, and meaning based on the geometry they are given, but I could lump that into "smarter" and call it good.
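
    Point 1, in toy Python form (the workload is a made-up stand-in): independent chunks get farmed out to a pool of cores instead of running back to back.

    from concurrent.futures import ProcessPoolExecutor

    def solve_chunk(n):
        # Stand-in for one independent piece of a larger computation.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [2_000_000] * 8  # eight independent chunks
        # The chunks run concurrently, roughly one per core.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(solve_chunk, jobs))
        print(sum(results))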
  • potatohead Posts: 10,261
    edited 2013-08-10 16:17
    @Heater, "can't we do the parametric document?"

    Well, sort of. A whole lot of what we see today is simple assembly or aggregation of documents. Systems that parse the semantics to help on that front are still very new and pretty much entirely missing from the ordinary application workflow. We get stuff like syntax or grammar checking, spelling and other basic mechanics, but we don't get any smarts comparable to what is being done in CAD to infer intent and sort out the merge of complex relationships.

    So yes, with some effort one can do that. Honestly, the real gain is being able to do it across the network. If the data is sourced locally, we don't get much besides a presentation layer that couldn't be done on an old 8-bit computer. (That's where I first created conditional documents, real-estate contracts for an uncle of all things, on a lowly C64. I forget the name of the program, but it allowed for conditional sourcing of documents and text replacement using tokens. It gave him quite the advantage for a while.)

    And the state of things today is where CAD was, and still largely is, but for a couple of leaders. If I have a particular document in mind, I can very easily build one that does many of the things I wrote about. Attorneys do this now in WordPerfect, which is built for that kind of thing. It takes planning, and the issue with planning is risk.

    Does the plan actually see use? If it does, the investment will return nicely. If it doesn't, then the whole thing is kind of crappy, and it can take more time to sort out than just authoring anew.

    Where I'm headed with this is the ad-hoc thing we humans see easily, but where reuse in many contexts ends up a wash on the details. And yes, we have standards for that, planning for that, etc... None of that is wrong or bad. Works well! But it doesn't work when there are unanticipated changes, which is what plagues the CAD people, who have invested and continue to invest in that case, because it's always the dominant case and something people will pay very large amounts of money to get improvements on.

    To me, that's a parallel problem, because solving it seems to take an order of magnitude more peak CPU throughput than we currently have in sequential systems.

    If we somehow get 8 GHz CPUs and memory that can handle that kind of speed, great! I'm quite pleased to see all the sequential stuff speed up nicely and put the problem off for a while. :)

    Re: Just the man...

    Well, maybe some day. I don't understand how current semantic systems work. The ones I have seen and used are quite potent, but... the CPU and data costs are high. Really high. And the ones I've used are basically research tools that can parse documents for meaning, not just keywords. They can know the difference between "hand saw" and "saw hand", very significantly improving research productivity over keyword-type systems. Currently, they appear to automate indexing in meaningful ways, very similar to what a person would do when tasked with feeding research teams and answering queries. They can also do neat stuff like parse a shared data repository, read e-mail, documents written, etc., and from that tell you who knows what, in addition to what is published, patented, or in the IP repository that may be relevant to the task at hand.
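
    A trivial Python illustration of why that distinction matters (toy code, nothing like what the real systems do): a bag-of-words keyword match can't tell the two phrases apart, while even a naive ordered match can.

    docs = ["the carpenter bought a hand saw",
            "the surgeon saw hand injuries all week"]

    def keyword_match(doc, terms):
        # Bag of words: word order is thrown away.
        return set(terms) <= set(doc.split())

    def phrase_match(doc, phrase):
        # Ordered match: only the exact word sequence counts.
        return phrase in doc

    for d in docs:
        print(keyword_match(d, ["hand", "saw"]), phrase_match(d, "hand saw"))
    # keyword_match is True for both; phrase_match only for the first doc.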

    That is kind of similar to what the CAD systems are doing with geometry models. Move a hole, and the system now knows a ton of things and can infer the meaning of the hole and the surrounding geometry in ways that people naturally do. This saves a bunch of time over either having to put that intent in up front, hoping it's applicable to changes, or having people just do the work directly, which leads to "detail saturation" and errors. Bonehead errors.
  • potatohead Posts: 10,261
    edited 2013-08-10 16:20
    Re: Post and TL;DR.

    Well, that one took about 15 minutes to author. I watched the thing, had the thoughts, blasted it all into the little window, and went for a long drive to meet the Mrs., who is staying with family in Central Oregon. Again, a parallel approach.

    You see, I could queue those thoughts, post a couple related ones, keep 'em in my head, do the work, come back, respond, post the next one, and it all happens in little chunks. Or... why not just put them out there and be done with it? I can always reread them myself should I need to, let you guys react, and we work in much bigger chunks. Higher throughput and more robust doing it that way, so I just do. It's like larger word sizes! 8 bits is nice and easy, but it takes a lot of little ops to get bigger stuff done. 32 bits is much more roomy. Sort of like that.

    I think the TL;DR is rude, frankly. You probably won't see it out of me, unless the discussion is crappy. :)

    In any case, I'm back now with a thread filled with thoughts; I can batch out a few of these and go turn on my P2, having put it out there. Seems a great way to work to me. Often the work required to whittle it down, or break it into segments, dilutes the higher-order thoughts that connect. I don't like that.

    If you don't like bigger chunks, trust me, I get it. No worries. There is a scroll bar, and I know I've been scrolled many times before. You won't be the first, nor the last. And if you ask me, I can do little chunks where needed too.
  • Heater. Posts: 21,230
    edited 2013-08-10 16:28
    So, yep, parallel processing is only "smarter" because we have more memory and more processing power. In some applications the increased network bandwidth that comes along with parallel helps; Google search, for example. But parallel is not in any purely logical way different from having a single CPU.
  • potatohead Posts: 10,261
    edited 2013-08-10 16:44
    Agreed. But there are real limits, and those form the boundary cases for discussing these things. In my mind there is a sort of truism in play here as well:

    At any given time we have a peak sequential capability. It is what it is, with our understanding of physics pushing the edge on that. Parallel is always there to augment that peak, limited by our ability to express the problems in ways that can be managed.

    Right now, semantics, intent, basic machine learning, etc... all appear to exceed our sequential capability, meaning those fall into the parallel-problem bucket. Then again, who knows? There may yet be ways to factor the problem into the sequential space too.

    Regarding the video, all I know is I ended up drawing a parallel between the ability to infer, assist, and augment that I'm seeing in complex designs and their networks of logic, math, etc... and these programming ideas, reuse, and so on.
  • Heater. Posts: 21,230
    edited 2013-08-10 18:07
    Did anyone notice that Bret Victor has another video where he shows some great demos of his attempts at the future of programming? http://www.youtube.com/watch?v=PUv66718DII
    That video is about a somewhat deeper philosophy of life, but the demos illustrate it very well.