Recipe for folded space: First you take 2 black hole singularities... bring them together slowly... and when the gravity wells of each singularity interact... oh... you touched one? uummm... bye-bye.
Ok, joking aside... I can't think of a way to fold space without some serious gravity coming into play... last I checked... lots of gravity would tend to separate my body parts pretty quickly with no attempt to keep them nicely labeled and ordered so I can be put back together again... chaos wins.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
There's nothing like a new idea and a warm soldering iron.
Personally, I think one day we'll figure out how to do it (fold space, warp), if only for the simple fact that there was a day when we could see no way a human could fly, or go to the moon, etc. Unfortunately this isn't Star Trek, so I don't think we're "old" enough as a species to have the understanding of the universe we live (in/on/through) yet. But with a positive mental attitude and science, we'll keep chipping away at it.
So sorry to hear of your battles with Crohn's disease. :-(
Must be awful to have cramps and upset tummy very often.
Wish I could do something....
Thanks. I've been in remission for most of the past 15 years. A consultant once told me that it tended to burn itself out as one got older, which seems to have happened in my case.
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
My favorite teacher in HS had it... she often suffered from a skin rash that she said was caused by it. I really miss her... it's been almost 4 years since I was in HS... maybe I will call her tonight and see how she is doing.
I think she is about 50 now... maybe her symptoms have started to wane.
I downloaded the random code by Chip and read the comment at the beginning of the code... but I'm still uncertain about its randomness.
He certainly seems confident that it is truly random and not like the repeating sequence (however long the sequence) that pseudo-random generators produce. But the randomness seems to depend on pseudo-random interference to the PLL, which is then supposed to give rise to pure randomness... somehow it seems unlikely. Not that I would ever imagine I was smarter than Chip, because I'm sure I'm not even close. But it seems rather like depending on a random event from a noisy diode that has to be tapped regularly to keep it spewing out static... it seems questionable somehow. Now I understand that small electrical variations in a circuit might also have small effects on the noise from a diode, like tiny voltage fluctuations, etc. It will take someone smarter than me to decide for certain about true randomness in a circuit. Even radioactive decay might be influenced by external forces we don't understand, and thus be ever so close to true randomness but not quite. Still, the random noise from a diode or radioactive decay might be near enough to still be usable in a study like PEAR... and so might Chip's random generator. I just don't know.
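No one can prove a bit stream is truly random, but it is easy to run the standard statistical sanity checks on one. Here is a minimal sketch in Python of two checks in the spirit of the NIST SP 800-22 suite (the thresholds here are illustrative, not NIST-exact); passing them never proves true randomness, but failing them does prove non-randomness:

```python
import random

def monobit_and_runs(bits):
    # Bit balance (monobit) and run count. Under the fair-coin null
    # hypothesis, balance should be near 0.5 and the number of runs
    # near 2*ones*(n-ones)/n + 1.
    n = len(bits)
    ones = sum(bits)
    balance = ones / n                                # ideal: 0.5
    runs = 1 + sum(bits[i] != bits[i - 1] for i in range(1, n))
    expected_runs = 2.0 * ones * (n - ones) / n + 1   # expected under null
    return balance, runs, expected_runs

# Exercised here on Python's PRNG; the same checks could be fed bits
# captured from Chip's generator over a serial link.
bits = [random.getrandbits(1) for _ in range(10_000)]
balance, runs, expected = monobit_and_runs(bits)
print(f"balance={balance:.3f}  runs={runs}  expected runs about {expected:.0f}")
```

A stream that badly fails these is certainly not usable; one that passes still needs deeper scrutiny.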
Here is the comment:
A real random number is impossible to generate within a closed digital system. This is because there are no reliably-random states within such a system at power-up, and after power-up, it behaves deterministically. Random values can only be 'earned' by measuring something outside of the digital system.

In your programming, you might have used 'var?' to generate a pseudo-random sequence, but found the same pattern playing every time you ran your program. You might have then used 'cnt' to 'randomly' seed the 'var'. As long as you kept downloading to RAM, you saw consistently 'random' results. At some point, you probably downloaded to EEPROM to set your project free. But what happened nearly every time you powered it up? You were probably dismayed to discover the same sequence playing each time! The problem was that 'cnt' was always powering-up with the same initial value and you were then sampling it at a constant offset. This can make you wonder, "Where's the end to this madness? And will I ever find true randomness?"

In order to have real random numbers, either some external random signal must be input, or some analog system must be used to generate random noise which can be measured. We're in luck here, because it turns out that the Propeller does have sufficiently-analog subsystems which can be exploited for this purpose -- each cog's CTR PLLs. These can be exercised internally to good effect, without any I/O activity.

This object sets up a cog's CTRA PLL to run at the main clock's frequency. It then uses a pseudo-random sequencer to modulate the PLL's target phase. The PLL responds by speeding up and slowing down in an endless effort to lock. This results in very unpredictable frequency jitter which is fed back into the sequencer to keep the bit salad tossing. The final output is a truly-random 32-bit unbiased value that is fully updated every ~100us, with new bits rotated in every ~3us. This value can be sampled by your application whenever a random number is needed.
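The feedback scheme is hard to reason about in the abstract, so here is a toy software model of the idea (not the Propeller hardware and not Chip's actual Spin/PASM code): a 16-bit LFSR whose feedback bit is XORed with an external "jitter" bit before being shifted in. In this sketch the jitter is faked with Python's PRNG; in the real object it comes from analog PLL phase jitter, which is exactly what breaks the determinism:

```python
import random

def jittered_lfsr(steps, state=0xACE1):
    # Classic 16-bit Fibonacci LFSR (taps at bits 0, 2, 3, 5, i.e.
    # polynomial x^16 + x^14 + x^13 + x^11 + 1). Each step the feedback
    # bit is XORed with a jitter bit, so the output never settles into
    # the LFSR's fixed 65535-state cycle.
    out = []
    for _ in range(steps):
        jitter = random.getrandbits(1)   # stand-in for PLL phase jitter
        fb = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = ((state >> 1) | ((fb ^ jitter) << 15)) & 0xFFFF
        out.append(state & 1)
    return out

print(jittered_lfsr(32))
```

With the jitter line deleted, this collapses back into an ordinary deterministic PRNG, which is the crux of the debate above: everything hinges on how unpredictable the jitter source really is.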
> A real random number is impossible to generate within a closed digital system. This is because there are no
> reliably-random states within such a system at power-up, and after power-up, it behaves deterministically.
> Random values can only be 'earned' by measuring something outside of the digital system.
For random earning, what about those old smoke detectors? Didn't the detector use some kind of radioactive isotope emitter that a sensor picked up on... a watchdog, or intensity measurement... when the particles didn't arrive, then smoke, humidity, or burning dinner, set it off.
So, if the sensor were much more sensitive - or the emitter dampened - then it *might* be set to small numbers of particles. Quantum mechanics says the emission of a particle from the nucleus of a radioactive isotope is not predictable (within the ranges of selected probabilities). (That's the basis of the Schrödinger's Cat thought experiment.)
@Holly, even if the PEAR lab or its experiments were flawed, that schematic you posted is very intriguing... I like your experiment suggestion.
That is a very good idea about the smoke detector.
I guess it would just be the average length of time between particle detections that would
form the random bit stream...probably would not be anywhere near 50/50 like the noisy diode
but still usable.
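In fact, a detection-interval stream doesn't need to come out 50/50 to be usable. The inter-arrival times of independent decays are exponentially distributed, and comparing two consecutive intervals yields an unbiased bit by symmetry: P(t1 < t2) = 1/2. A simulated sketch (real hardware would read timer captures from the detector; `random.expovariate` stands in for the physics here):

```python
import random

def decay_bits(n_bits, rate=1.0):
    # Simulate Poisson-process detection intervals and turn each pair
    # of consecutive intervals into one unbiased bit.
    bits = []
    while len(bits) < n_bits:
        t1 = random.expovariate(rate)   # simulated time to next detection
        t2 = random.expovariate(rate)
        if t1 != t2:                    # exact ties have probability zero
            bits.append(1 if t1 < t2 else 0)
    return bits

bits = decay_bits(10_000)
print(sum(bits) / len(bits))            # hovers near 0.5
```

This pairwise-comparison trick is the same idea as von Neumann debiasing: it trades throughput for an output that is balanced regardless of the source's bias.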
HollyMinkowski said...
Well, to bring microcontrollers into this esoteric subject matter, consider the Princeton PEAR lab. They have small boxes scattered around the world; these boxes contain a quantum noise source (I think it's a noisy diode) that is monitored by a controller. The devices plug into routers and send data about fluctuations outside normal chance back to a central location where it is analyzed. They claim statistically significant deviations from pure chance in the hours leading up to dramatic world events such as 9/11.
(No offense intended - take this as a comment, not a criticism, okay?)
Predicting past events isn't very interesting, no matter how purely random the generators of the "predictions" might be. Every day there are countless events occurring in the world, and countless correlations between events. The ability to identify some of them after the fact doesn't require anything outside of our normal ("scientific", "natural") understanding of the world around us.
Reliably predicting _future_ events would be a whole different matter, but "psychics" don't seem too eager to take up that challenge, do they?
In short, I wouldn't be too excited about this research no matter how good the random number generators were, unless the giant methodological flaw were fixed. Put together a protocol in which "dramatic world events" are reliably predicted _before they happen_ and I'll start paying attention.
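The point about after-the-fact correlations can be made quantitative. Monitor many independent, perfectly fair bit streams, then go looking: about 5% of them will show a "significant" deviation (|z| > 1.96) over any window you pick, by chance alone. A quick simulation (the stream count and window size are arbitrary choices for illustration):

```python
import random

def spurious_hits(n_streams=1000, n=1000, z_crit=1.96):
    # Count how many perfectly fair streams look 'significant'
    # at the 5% level over a window of n bits.
    hits = 0
    for _ in range(n_streams):
        heads = sum(random.getrandbits(1) for _ in range(n))
        z = (heads - n / 2) / (0.5 * n ** 0.5)
        if abs(z) > z_crit:
            hits += 1
    return hits

print(spurious_hits())   # typically around 50 of 1000
```

So a worldwide network of RNGs is guaranteed to contain some "anomalous" streams around any given event; only a prediction registered in advance distinguishes an effect from this baseline.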
There is a long story behind this - but the whole reason I got into using the Basic Stamp is because of my research into "Folding Space", or "Warp" possibilities. I needed to create some complex interactions between magnetic fields in multiple coils to model a concept, and I needed a microcontroller to coordinate and adjust the field pulsing. In between my various oddball experiments, I research any *reasonable* theories out there.
Has anyone looked over the original Alcubierre metric? Or the modified field that came out of that?
Also - @pwillard - gravity is not a primary force, if space can in fact be folded or warped. It is the folding/warping of space that gives rise to gravity. There is a property of whatever gives matter "mass" that causes space to warp. What we perceive as gravity is actually just the local curvature of space.
One of the leading theories as to what creates a warping of space is that particles of a particular spin actually create a tiny unit of space around themselves, and that the interlinking of these microspaces creates what we perceive as space. So we don't need a lot of gravity - we need a lot of whatever it is that warps space and gives rise to gravity.
As best as I understand, anyway...
The LHC, which is now scheduled to be run up in October, might shed some light on what it is that gives matter mass, and will hopefully answer such seemingly mundane questions as "What is inertia?" and "Why DOES an object in motion, in fact, tend to remain in motion unless acted upon by an outside force?" These seem like obvious questions, but at the moment, we do not have an explanation for them at the quantum level.
This and other mentions in this thread brought this to mind! Is not "truly random" just another word for what we cannot digitally measure yet? Wouldn't the understanding of anything bring its relevance below the line of "random-psychic-paranormal-or-spiritual"?
Let's face it, the very instant man is smart enough to understand and measure and duplicate something, it gets sold at K-mart for $19.99, and random then becomes his next achievement.
As digital becomes infinitely closer to DC current, then random appears; and when digital gets its next nanowatt upgrade, we will push random up to another value and give it also a name worthy of a deity!
A true random number generator? Try the service counter at your local department store.
Though the working title is about 'folding space', I am actually more interested in how parallel lines can be seen to converge. It is all about using the geometry to navigate. When we drive on the freeway, we presume everyone will stay in their lane. Our assumption is that everyone accepts a parallel route. But if there are conditions that cause two parallel vectors to converge, there is an opportunity to take a shortcut to a destination. And if there is an alternative view that says no parallel vectors exist, we might be in a safe mode to go really fast.
So you see, I am really taking a navigator's point of view. Putting together black holes is not really a practical approach to inter-galactic travel, is it? A pragmatic revision of geometry may get us to the next star at near-light speed, or maybe faster.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Ain't gadgetry a wonderful thing?
The PEAR people also claimed that individuals could influence the randomness of their RNGs, via some sort of interaction between consciousness and quantum events. A spin-off from PEAR is this company that sells their RNGs with software so that users can test their "mind over matter" abilities for themselves:
Leon said...
The PEAR people also claimed that individuals could influence the randomness of their RNGs, via some sort of interaction between consciousness and quantum events. A spin-off from PEAR is this company that sells their RNGs with software so that users can test their "mind over matter" abilities for themselves:
The key word, of course, is "sells".
If they had an interface that could respond to simple thought, we'd all be using it with our projects here, and avoiding the hassle of having to cut holes in project boxes for the on-off switches. What they've got, you can be sure, is a device that works (fairly) randomly, and users who reliably convince themselves that they see patterns in the randomness. Human beings are really good at that.
The problem is that these effects are not large and they're not reliable / reproducible. There's evidence that they can be influenced by the "mood" of the people involved, so a strongly biased "observer" can influence the effect. For clairvoyance, it helps if the individuals involved can quiet their minds and training or experience helps. All of these things run counter to the way scientific experimentation is usually done. That doesn't mean that these effects are absent or "quackery", just that they're very hard to measure and that we may need to find new methods to measure them that respect some of the subtleties involved.
The recent arguments over the use of BPA plasticizers is an example of this methodology "gap". The standard way to test such chemicals is to increase the amount being given to test animals or cultures until the test subjects die, then back off to establish maximum safe levels. That works for most toxins. The trouble is that hormone-like substances like BPA have another "knee" in their toxicity curve at very low doses where they have profound effects on development in immature animals at specific times in development and relatively little effect on adult animals. The accepted methodology for testing potential toxins misses this effect entirely. That doesn't mean the problem isn't there. It just means that we don't see it with the tools at hand.
Post Edited (Mike Green) : 6/23/2009 2:34:09 PM GMT
Mike Green said...
The problem is that these effects are not large and they're not reliable / reproducible. There's evidence that they can be influenced by the "mood" of the people involved, so a strongly biased "observer" can influence the effect. For clairvoyance, it helps if the individuals involved can quiet their minds and training or experience helps. All of these things run counter to the way scientific experimentation is usually done. That doesn't mean that these effects are absent or "quackery", just that they're very hard to measure and that we may need to find new methods to measure them that respect some of the subtleties involved.
The recent arguments over the use of BPA plasticizers is an example of this methodology "gap". The standard way to test such chemicals is to increase the amount being given to test animals or cultures until the test subjects die, then back off to establish maximum safe levels. That works for most toxins. The trouble is that hormone-like substances like BPA have another "knee" in their toxicity curve at very low doses where they have profound effects on development in immature animals at specific times in development and relatively little effect on adult animals. The accepted methodology for testing potential toxins misses this effect entirely. That doesn't mean the problem isn't there. It just means that we don't see it with the tools at hand.
Well, absence of evidence is not evidence of absence, true. However, if they're not detectable using the tools at hand, what is the evidence that these effects are there at all? Unfortunately, what is usually purported to be evidence - personal experience, intuition, collections of anecdotes - is far more crude than scientific experimentation. There may be some sense in which scientific methods are not good at dealing with uncertainty, but in fact they're far better at it than anything else we have. Personal reports of efficacy, which are the real source of most belief in paranormal abilities, are in fact completely dead in the water when it comes to dealing with uncertainty. So are collections of anecdotes, and human intuition is notoriously insensitive to uncertainty. In fact, I'm unaware of anything other than standard scientific methodologies that even makes an attempt to account for uncertainty. In my judgment, dealing with uncertainty is exactly what scientific methods are best at, and I'm very surprised to have heard you say that you think otherwise.
The measurement issue also seems to run the other direction. It is not scientific experimentation that lacks sensitive, well-defined measurement: it's everything else. By any reasonable standard, it is generally when measurement issues are ignored that positive psychic results are "found". Just as absence of evidence is not evidence of absence, the fact that a method fails to find an effect is not evidence that the method is insensitive. It may well be that there is no effect to be found.
I do personally think that there very likely is something to acupuncture, in the sense that it is probably effective for some things in some circumstances. But what will convince me will be evidence generated by methods that eliminate alternative hypotheses (most notably random error), not self-reports devoid of methodology. The fact that there are potential mechanisms, as you already pointed out, helps a great deal as well.
Re. clairvoyance, what distinguishes non-reproducible "hits" from mere chance? If I flip a coin and call "heads" and it lands heads, is that a clairvoyant ability? If I flip it 100 times and get it right 50 of those times, is that a 50% reproducible clairvoyant ability? In my book, it's a 100% failure. Now, I do recognize that there might be a real effect that occurs only under certain circumstances, and we don't yet know what those circumstances are. Perhaps someday we'll be able to specify conditions for testing under which the effect IS reproducible. But until we can do that, the effect is indistinguishable from cherry-picking of random events, and because we do know that to be a very common occurrence, it's easily the better explanation.
Finally, if the effects are so small that they require sensitive random number generators and millions of trials to be detected, they most certainly are NOT the same kinds of effects that people not using such devices and protocols claim to detect through casual observation. You may get evidence of something, but it's not the same "something" that psychics are claiming.
Post Edited (sylvie369) : 6/23/2009 5:00:16 PM GMT
When I was taught statistics we were told that using too many trials increases the probability that the results will be due to chance alone. Lots of paranormal researchers don't seem to know about that problem. They also tend to combine the data from numerous studies to get their "significant" results, which is valid in certain circumstances, but not for those kinds of experiments.
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
Leon said...
When I was taught statistics we were told that using too many trials increases the probability that the results will be due to chance alone. Lots of paranormal researchers don't seem to know about that problem. They also tend to combine the data from numerous studies to get their "significant" results, which is valid in certain circumstances, but not for those kinds of experiments.
Leon
I don't think it's strictly true that using too many trials (or too big a sample size) increases the probability of results being due to chance alone. It does increase the probability that a substantively meaningless result will nonetheless be "statistically significant", though. The slow-but-steady move towards use of effect sizes and power analyses (and away from classical statistical hypothesis testing) will take care of that problem.
Pooling data after the fact for the purposes of finding "significant" results is just a fancy way of cherry-picking. It _might_ serve as a basis for identifying new things to study, but it does not produce evidence in and of itself. This is a problem that people doing real science have to deal with. Take a look at this page about meta-analysis:
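The sample-size point is easy to demonstrate. Under the null hypothesis of a fair coin, the z statistic for a fixed small bias grows like the square root of the number of trials, so any nonzero bias becomes "statistically significant" if you simply collect enough data, no matter how meaningless the effect size:

```python
import math

def z_for_bias(p, n):
    # Expected z statistic when the true heads probability is p,
    # tested against the null p = 0.5 over n flips:
    # z = (p*n - 0.5*n) / sqrt(0.25*n), which grows as sqrt(n).
    return (p * n - 0.5 * n) / math.sqrt(0.25 * n)

# A 0.1% bias: invisible at a thousand flips, 'highly
# significant' at ten million.
for n in (1_000, 100_000, 10_000_000):
    print(f"n={n:>10,}  z={z_for_bias(0.501, n):.2f}")
```

This is why effect sizes matter: the z value tells you whether the bias is real, not whether it is big enough to care about.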
Meta-analysis was the word I was trying to remember. It was used extensively by the PEAR people. I should use the past tense as PEAR was disbanded a few years ago; Princeton found its presence on their campus rather embarrassing.
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
Meta-analysis was the word I was trying to remember. It was used extensively by the PEAR people. I should use the past tense as PEAR was disbanded a few years ago; Princeton found its presence on their campus rather embarrassing.
Leon
Ah, the law of truly large numbers. Yup, that's what I was talking about earlier, when I suggested that identifying correlations after the fact is not very interesting. That's not quite the same as what I thought you meant.
Diaconis is a very interesting character. Check out this article, and especially his analysis of the fairness of a coin toss:
Re. Meta-analysis, did the PEAR people identify _in advance_ criteria for selection of studies? Did they make an effort to include a representative sample of studies? My suspicion, obviously, is that they simply chose sets of studies in order to "find"* significant results.
* "Manufacture" would be the more appropriate word.
OK, hypothetically, let's discard anything that relies on the 'law of truly large numbers'. The wiki says:
"""
For a simplified example of the law, assume that a given event happens with a probability of 0.1% in one trial. Then the probability that this unlikely event doesn't happen in a single trial is 99.9% = 0.999.
In a sample of 1000 independent trials, the probability that the event doesn't happen in any of them is 0.999^1000 ≈ 0.368, a probability of 36.8%. The probability that the event happens at least once in 1000 trials is then 1 − 0.368 = 0.632 or 63.2%!
This means that this "unlikely event" has a probability of 63.2% of happening if 1000 chances are given. In other words, even given a highly unlikely event, the chance that it never happens, given enough tries, is even more unlikely.
"""
Therefore we must discard the vast majority of the results of quantum mechanics, which is founded on statistics with *vastly* larger numbers than these. Look at quantum well phenomena, or supercooled phenomena. In fact, if we accept that the above is scientifically significant, then the junctions in microchips *cannot function*, because >99% of the electrons will fly off to some other galaxy with only 1% at the P-N line. Therefore, the Propeller is an illusion.
True science strives to be 'fair and balanced' in its objectivity...
I'd never heard of Diaconis before. I have heard of Mosteller, of course. Richard Wiseman, a professor of psychology here in the UK, started off as a magician, as well. He spends a lot of time debunking paranormal studies:
No, the PEAR people didn't do any of that stuff. They don't appear to have been selective, though. On the contrary, they lumped together lots of disparate studies.
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
> On the contrary, they lumped together lots of disparate studies.
Leon (and Sylvie too) ... not trying to raise your ire here, but this is what 'accepted' medical research does all the time. (So does standard psychological research.) It's called "clinical trials":
" As positive safety and efficacy data are gathered, the number of patients is typically increased. Clinical trials can vary in size from a single center in one country to multicenter trials in multiple countries. " - wiki, " clinical trial ".
So I guess with quantum mechanics, we have to throw out medicine too?
I always liked the notion of journal for "anomalous results". I know they exist (the journals). There are always phenomena that appear anomalous ... they may have fanciful explanations or explanations that are at odds with existing accepted dogma (which doesn't mean they're fanciful), but the phenomena appear to exist. Quantum mechanics started that way.
Homeopathy is one example ... The explanation for it doesn't make sense in terms of how the world seems to work, but there are studies showing a clinical effect not very different from the (small) clinical effect from conventional medicines. If the claim is that the effect of homeopathic treatment is too small ... the treatment effect of conventional medicine is similar ... should both be discarded as ineffective? Why is it that only the "poor cousin" of homeopathy is rejected out of hand? It's mostly about money. There's little money to be made from producing homeopathic remedies and the explanation given for how they work doesn't make sense in conventional biology / chemistry.
There's good evidence that homeopathy was clinically helpful in the treatment of the 1918 Influenza epidemic while conventional medical treatments (as limited as they were at the time) were not.
My point is that, rather than rejecting anomalous results out-of-hand, we should be saying to ourselves "That's interesting. Something strange seems to be going on here. What can we learn from it? Is there somewhere where our knowledge of the universe is not as complete as it should be? What can we learn from this?" At worst, we'll find an explanation for a phenomenon that makes sense at the fringes of our knowledge, but is at best a toy given our current state of engineering knowledge. It wouldn't be the first time something like that has happened. At best, we might have some kind of fundamental breakthrough, perhaps in a direction unexpected at the beginning of the investigation.
RE: homeopathy - this is a good example of something that is, as you say, inexplicable yet works - and is something that people would love to 'debunk', as the efficacy of these medications does not fit the standard models of physics or chemistry. Yet there is an astounding amount of solid clinical use --- in Europe more than America, and in Germany especially. To those who think it's bunk because it 'hasn't worked for them, or someone they know': that's because either the medical practitioner didn't find the correct medicament, or - far more frequently - because the patient did not consistently follow the protocol. Of course, these things, like anything else, can only be used where there are reasonable grounds for them working.
> That's interesting. Something strange seems to be going on here. What can we learn from it? Is there somewhere where our knowledge of the universe is not as complete as it should be?
Well said. Isn't that what science is really all about?
CounterRotatingProps said...
> On the contrary, they lumped together lots of disparate studies.
Leon (and Sylvie too) ... not trying to raise your ire here, but this is what 'accepted' medical research does all the time. (So does standard psychological research.) It's called "clinical trials":
" As positive safety and efficacy data are gathered, the number of patients is typically increased. Clinical trials can vary in size from a single center in one country to multicenter trials in multiple countries. " - wiki, " clinical trial ".
So I guess with quantum mechanics, we have to throw out medicine too?
Cheers
- Howard
(a skeptic too)
Meta-analysis is accepted as appropriate for analyzing some medical data, it's simply not appropriate for the type of experiments conducted by the PEAR people.
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
Homeopathy has never been shown to be superior to treatment with a placebo in a properly conducted trial, AFAIK. How can administering water to a patient with not a single molecule of an active substance possibly have an effect on someone, even if it is shaken by a machine or a person at each dilution?
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
Your statement is simply not true. It has been shown to be superior to treatment with a placebo in several unrelated trials. Whether it's superior to conventional pharmaceuticals in similar circumstances is still unanswered. I don't claim that the explanation of how it works is true, only that, if you prepare a solution a particular way and apply it to lactose beads which people dissolve under their tongue and compare the clinical outcomes to the same source of lactose beads without the solution applied, that in a statistically significant number of cases, the clinical outcomes are better when the prepared solution is used. The differences are not large, but they're in situations where large populations are involved and the overall costs of care can be markedly influenced with small changes in severity or duration of symptoms, things like influenza and childhood infectious diarrhea.
Comments
@Leon:
deep, thorough-going discussions, yes - arguments, no.
> There are some very bright people to be found there, as well as a lot of nutters.
LOL - there's enough nutters in near proximity to me already ... that just confirms that I should stay out of there!
- Howard
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Shawn Lowe
When all else fails.....procrastinate!
So sorry to hear of your battles with Crohn's disease. :-(
Must be awful to have cramps and upset tummy very often.
Wish I could do something....
Thanks. I've been in remission for most of the past 15 years. A consultant once told me that it tended to burn itself out as one got older, which seems to have happened in my case.
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
My favorite teacher in HS had it...she often suffered from a skin rash that she said was
caused by it. I really miss her...been almost 4yrs since I was in HS..maybe I will call her
tonight and see how she is doing.
I think she is about 50 now...maybe her symptoms have started to wane.
I downloaded the random code by Chip and read the comment at the beginning of the code... but I'm still uncertain about its randomness.
He certainly seems confident that it is truly random, and not like the repeating sequence (however long the sequence) that pseudo-random
generators produce. But the randomness seems to depend on pseudo-random interference to the PLL, which is then supposed to give rise to
pure randomness... somehow it seems unlikely. Not that I would ever imagine I was smarter than Chip, because I'm sure I'm not even close.
But it seems rather like depending on a random event from a noisy diode that has to be tapped regularly to keep it spewing out static... it seems
questionable somehow. Now, I understand that small electrical variations in a circuit might also have small effects on the noise from a diode, like tiny
voltage fluctuations, etc. It will take someone smarter than me to decide for certain about true randomness in a circuit. Even radioactive decay
might be influenced by external forces we don't understand, and thus be ever so close to true randomness, but not quite. Still, the random noise
from a diode or radioactive decay might be near enough to be usable in a study like PEAR... and so might Chip's random generator. I just don't know.
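One thing anyone can do, without settling the philosophical question, is run basic statistical sanity checks on the generator's output stream. Passing such checks proves nothing about *true* randomness (a good pseudo-random generator passes them too), but a stuck or heavily biased source fails them immediately. A minimal sketch in Python; here Python's own PRNG stands in for bits captured from the hardware generator under test:

```python
import random

def monobit_and_runs(bits):
    """Crude sanity checks: fraction of ones, and number of runs.

    A healthy stream of n bits should show ~50% ones and ~n/2 runs.
    """
    n = len(bits)
    ones = sum(bits)
    runs = 1 + sum(1 for i in range(1, n) if bits[i] != bits[i - 1])
    return ones / n, runs

# Stand-in bitstream: Python's PRNG plays the role of bits sampled
# from the generator under test.
stream = [random.getrandbits(1) for _ in range(10_000)]
print(monobit_and_runs(stream))   # near (0.5, ~5000)

stuck = [1] * 10_000              # a dead source is caught at once
print(monobit_and_runs(stuck))    # (1.0, 1)
```

Serious testing would use a full battery of such statistics (the NIST suite, Diehard, and so on), but even these two lines of arithmetic catch the gross failure modes.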
Here is the comment:
A real random number is impossible to generate within a closed digital system. This is because there are no reliably-random states within such a system at power-up, and after power-up, it behaves deterministically. Random values can only be 'earned' by measuring something outside of the digital system.

In your programming, you might have used 'var?' to generate a pseudo-random sequence, but found the same pattern playing every time you ran your program. You might have then used 'cnt' to 'randomly' seed the 'var'. As long as you kept downloading to RAM, you saw consistently 'random' results. At some point, you probably downloaded to EEPROM to set your project free. But what happened nearly every time you powered it up? You were probably dismayed to discover the same sequence playing each time! The problem was that 'cnt' was always powering-up with the same initial value and you were then sampling it at a constant offset. This can make you wonder, "Where's the end to this madness? And will I ever find true randomness?"

In order to have real random numbers, either some external random signal must be input, or some analog system must be used to generate random noise which can be measured. We're in luck here, because it turns out that the Propeller does have sufficiently-analog subsystems which can be exploited for this purpose -- each cog's CTR PLLs. These can be exercised internally to good effect, without any I/O activity.

This object sets up a cog's CTRA PLL to run at the main clock's frequency. It then uses a pseudo-random sequencer to modulate the PLL's target phase. The PLL responds by speeding up and slowing down in an endless effort to lock. This results in very unpredictable frequency jitter which is fed back into the sequencer to keep the bit salad tossing. The final output is a truly-random 32-bit unbiased value that is fully updated every ~100us, with new bits rotated in every ~3us. This value can be sampled by your application whenever a random number is needed.
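The "same sequence at every power-up" failure the comment describes is easy to reproduce in any language: seed a PRNG with a constant (the analogue of 'cnt' always holding the same value at a fixed offset after reset) and the "random" stream is identical on every run. A quick Python illustration:

```python
import random

# Two "power-ups" that seed from the same constant value, just as a
# chip sampling a counter at a fixed offset after reset would.
run1 = random.Random(42)
run2 = random.Random(42)

seq1 = [run1.randrange(256) for _ in range(8)]
seq2 = [run2.randrange(256) for _ in range(8)]

print(seq1 == seq2)  # True: the identical "random" sequence, every run
```

This is exactly why the comment insists that randomness must be "earned" from outside the deterministic system: no choice of seed arithmetic alone can escape it.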
> reliably-random states within such a system at power-up, and after power-up, it behaves deterministically.
> Random values can only be 'earned' by measuring something outside of the digital system.
For random earning, what about those old smoke detectors? Didn't the detector use some kind of radioactive isotope emitter that a sensor picked up on... a watchdog, or intensity measurement... and when the particles didn't arrive, then smoke, humidity, or burning dinner set it off?
So, if the sensor were much more sensitive, or the emitter dampened, then it *might* be set to small numbers of particles. Quantum mechanics says the emission of a particle from the nucleus of a radioactive isotope is not predictable (within the ranges of the relevant probabilities). (That's the basis of the Schrödinger's cat thought experiment.)
@Holly, even if the PEAR lab or its experiments were flawed, that schematic you posted is very intriguing... I like your experiment suggestion.
- Howard
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
That is a very good idea about the smoke detector.
I guess it would just be the average length of time between particle detections that would
form the random bit stream...probably would not be anywhere near 50/50 like the noisy diode
but still usable.
Predicting past events isn't very interesting, no matter how purely random the generators of the "predictions" might be. Every day there are countless events occurring in the world, and countless correlations between events. The ability to identify some of them after the fact doesn't require anything outside of our normal ("scientific", "natural") understanding of the world around us.
Reliably predicting _future_ events would be a whole different matter, but "psychics" don't seem too eager to take up that challenge, do they?
In short, I wouldn't be too excited about this research no matter how good the random number generators were, unless the giant methodological flaw were fixed. Put together a protocol in which "dramatic world events" are reliably predicted _before they happen_ and I'll start paying attention.
Pardone moi, just wanted to barge in and say... THANK YOU CRP for this link: http://www.nct.anth.org.uk/
There is a long story behind this - but the whole reason I got into using the Basic Stamp is because of my research into "Folding Space", or "Warp" possibilities - I was needing to create some complex interactions between magnetic fields in multiple coils to model a concept, and I needed to use a microcontroller to coordinate and adjust the field pulsing. In between my various oddball experiments, I research any *reasonable* theories out there.
Has anyone looked over the original Alcubierre metric? Or the modified field that came out of that?
Dave
One of the leading theories as to what creates a warping of space is that particles of a particular spin actually create a tiny unit of space around themselves, and that the interlinking of these microspaces creates what we perceive as space. So we don't need a lot of gravity - we need a lot of whatever it is that warps space and gives rise to gravity.
As best as I understand, anyway...
The LHC, which is now scheduled to be run up in October, might shed some light on what it is that gives matter mass, and will hopefully answer such seemingly mundane questions as: what is inertia? And why does, in fact, an object in motion tend to remain in motion unless acted upon by an outside force? These seem like obvious questions, but at the moment we do not have an explanation for them at the quantum level.
Dave
This and other mentions in this thread brought this to mind: isn't "truly random" just another word for what we cannot digitally measure yet?
Wouldn't understanding anything bring its relevance below the line of "random, psychic, paranormal, or spiritual"?
Let's face it: the very instant man is smart enough to understand, measure, and duplicate something, it gets sold at K-mart for $19.99, and "random" then becomes his next achievement.
As digital gets infinitely closer to DC current, randomness appears; and when digital gets its next nanowatt upgrade, we will push "random" up to another value and give it a name worthy of a deity!
Though the working title is about 'folding space', I am actually more interested in how parallel lines can be seen to converge. It is all about using the geometry to navigate. When we drive on the freeway, we presume everyone will stay in their lane. Our assumption is that everyone accepts a parallel route. But if there are conditions that cause two parallel vectors to converge, there is an opportunity to take a shortcut to a destination. And if there is an alternative view that says no parallel vectors exist, we might be in a safe mode to go really fast.
So you see, I am really taking a navigator's point of view. Putting together black holes is not really a practical approach to intergalactic travel, is it? A pragmatic revision of geometry may get us to the next star at near-light speed, or maybe faster.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Ain't gadgetry a wonderful thing?
aka G. Herzog [noparse][[/noparse] 黃鶴 ] in Taiwan
The PEAR people also claimed that individuals could influence the randomness of their RNGs, via some sort of interaction between consciousness and quantum events. A spin-off from PEAR is this company that sells their RNGs with software so that users can test their "mind over matter" abilities for themselves:
www.psyleron.com/
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
If they had an interface that could respond to simple thought, we'd all be using it with our projects here, and avoiding the hassle of having to cut holes in project boxes for the on-off switches. What they've got, you can be sure, is a device that works (fairly) randomly, and users who reliably convince themselves that they see patterns in the randomness. Human beings are really good at that.
The recent arguments over the use of BPA plasticizers is an example of this methodology "gap". The standard way to test such chemicals is to increase the amount being given to test animals or cultures until the test subjects die, then back off to establish maximum safe levels. That works for most toxins. The trouble is that hormone-like substances like BPA have another "knee" in their toxicity curve at very low doses where they have profound effects on development in immature animals at specific times in development and relatively little effect on adult animals. The accepted methodology for testing potential toxins misses this effect entirely. That doesn't mean the problem isn't there. It just means that we don't see it with the tools at hand.
Post Edited (Mike Green) : 6/23/2009 2:34:09 PM GMT
The measurement issue also seems to run the other direction. It is not scientific experimentation that lacks sensitive, well-defined measurement: it's everything else. By any reasonable standard, it is generally when measurement issues are ignored that positive psychic results are "found". Just as absence of evidence is not evidence of absence, the fact that a method fails to find an effect is not evidence that the method is insensitive. It may well be that there is no effect to be found.
I do personally think that there is very likely something to acupuncture, in the sense that it is probably effective for some things in some circumstances. But what will convince me will be evidence generated by methods that eliminate alternative hypotheses (most notably random error), not self-reports devoid of methodology. The fact that there are potential mechanisms, as you already pointed out, helps a great deal as well.
Re. clairvoyance, what distinguishes non-reproducible "hits" from mere chance? If I flip a coin and call "heads" and it lands heads, is that a clairvoyant ability? If I flip it 100 times and get it right 50 of those times, is that a 50% reproducible clairvoyant ability? In my book, it's a 100% failure. Now, I do recognize that there might be a real effect that occurs only under certain circumstances, and we don't yet know what those circumstances are. Perhaps someday we'll be able to specify conditions for testing under which the effect IS reproducible. But until we can do that, the effect is indistinguishable from cherry-picking of random events, and because we do know that to be a very common occurrence, it's easily the better explanation.
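The arithmetic behind that judgment is worth making concrete. Under pure chance, 50 correct calls out of 100 is the single most likely outcome, and doing at least that well happens more often than not. A few lines of Python using the exact binomial distribution:

```python
from math import comb

def p_at_least(n, k):
    """P(at least k successes in n fair coin flips), exact binomial."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# "Calling it right 50 times out of 100" is no feat at all:
print(round(p_at_least(100, 50), 3))  # ≈ 0.54: better than even odds by chance
# Even 60/100 happens by dumb luck a few percent of the time:
print(round(p_at_least(100, 60), 3))  # ≈ 0.028
```

So a claimed "50% hit rate" on coin-flip-like calls is not weak evidence of clairvoyance; it is the textbook signature of no effect at all.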
Finally, if the effects are so small that they require sensitive random number generators and millions of trials to be detected, they most certainly are NOT the same kinds of effects that people not using such devices and protocols claim to detect through casual observation. You may get evidence of something, but it's not the same "something" that psychics are claiming.
Post Edited (sylvie369) : 6/23/2009 5:00:16 PM GMT
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
Pooling data after the fact for the purposes of finding "significant" results is just a fancy way of cherry-picking. It _might_ serve as a basis for identifying new things to study, but it does not produce evidence in and of itself. This is a problem that people doing real science have to deal with. Take a look at this page about meta-analysis:
http://en.wikipedia.org/wiki/Meta-analysis
In particular, notice the comments about "incorporation criteria" and "The File Drawer Problem".
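The file drawer problem is easy to demonstrate numerically. In this sketch (my own toy illustration, not anything from the PEAR data), a thousand "labs" each test a purely null effect, but only the striking results reach the pooled analysis:

```python
import random

rng = random.Random(0)

def study(n=100):
    """One null experiment: n fair coin flips; return the hit fraction."""
    return sum(rng.random() < 0.5 for _ in range(n)) / n

results = [study() for _ in range(1000)]        # the full record
published = [r for r in results if r > 0.58]    # only "exciting" results survive

all_mean = sum(results) / len(results)
pub_mean = sum(published) / len(published)
print(round(all_mean, 3))  # ~0.5: no effect exists in the full record
print(round(pub_mean, 3))  # well above 0.5: the file-drawer illusion
```

A meta-analysis that sees only the `published` list would "confirm" an effect that was manufactured entirely by the selection step, which is why incorporation criteria have to be fixed before the studies are gathered.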
en.wikipedia.org/wiki/Law_of_Truly_Large_Numbers
Meta-analysis was the word I was trying to remember. It was used extensively by the PEAR people. I should use the past tense as PEAR was disbanded a few years ago; Princeton found its presence on their campus rather embarrassing.
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
Diaconis is a very interesting character. Check out this article, and especially his analysis of the fairness of a coin toss:
http://news-service.stanford.edu/news/2004/june9/diaconis-69.html
Re. Meta-analysis, did the PEAR people identify _in advance_ criteria for selection of studies? Did they make an effort to include a representative sample of studies? My suspicion, obviously, is that they simply chose sets of studies in order to "find"* significant results.
* "Manufacture" would be the more appropriate word.
"""
For a simplified example of the law, assume that a given event happens with a probability of 0.1% in one trial. Then the probability that this unlikely event doesn't happen in a single trial is 99.9% = 0.999.
In a sample of 1000 independent trials, the probability that the event doesn't happen in any of them is 0.999^1000 ≈ 0.368, or 36.8%. The probability that the event happens at least once in 1000 trials is then 1 − 0.368 = 0.632, or 63.2%!
This means that this "unlikely event" has a probability of 63.2% of happening if 1000 chances are given. In other words, even given a highly unlikely event, the chance that it never happens, given enough tries, is even more unlikely.
"""
Therefore we must discard the vast majority of the results of quantum mechanics, which is founded on statistics with *vastly* larger numbers than these. Look at quantum well phenomena, or supercooled phenomena. In fact, if we accept that the above is scientifically significant, then the junctions in microchips *cannot function*, because more than 99% of the electrons will fly off to some other galaxy with only 1% at the P-N line. Therefore, the Propeller is an illusion.
True science strives to be 'fair and balanced' in its objectivity...
cheers.
- Howard
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
I'd never heard of Diaconis before. I have heard of Mosteller, of course. Richard Wiseman, a professor of psychology here in the UK, started off as a magician, as well. He spends a lot of time debunking paranormal studies:
www.richardwiseman.com/biography/biog.html
No, the PEAR people didn't do any of that stuff. They don't appear to have been selective, though. On the contrary, they lumped together lots of disparate studies.
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
Post Edited (Leon) : 6/23/2009 6:05:54 PM GMT
Leon (and Sylvie too) ... not trying to raise your ire here, but this is what 'accepted' medical research does all the time. (So does standard psychological research.) It's called "clinical trials":
" As positive safety and efficacy data are gathered, the number of patients is typically increased. Clinical trials can vary in size from a single center in one country to multicenter trials in multiple countries. " - wiki, " clinical trial ".
So I guess with quantum mechanics, we have to throw out medicine too?
Cheers
- Howard
(a skeptic too)
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Homeopathy is one example ... The explanation for it doesn't make sense in terms of how the world seems to work, but there are studies showing a clinical effect not very different from the (small) clinical effect from conventional medicines. If the claim is that the effect of homeopathic treatment is too small ... the treatment effect of conventional medicine is similar ... should both be discarded as ineffective? Why is it that only the "poor cousin" of homeopathy is rejected out of hand? It's mostly about money. There's little money to be made from producing homeopathic remedies and the explanation given for how they work doesn't make sense in conventional biology / chemistry.
There's good evidence that homeopathy was clinically helpful in the treatment of the 1918 Influenza epidemic while conventional medical treatments (as limited as they were at the time) were not.
My point is that, rather than rejecting anomalous results out-of-hand, we should be saying to ourselves "That's interesting. Something strange seems to be going on here. What can we learn from it? Is there somewhere where our knowledge of the universe is not as complete as it should be? What can we learn from this?" At worst, we'll find an explanation for a phenomenon that makes sense at the fringes of our knowledge, but is at best a toy given our current state of engineering knowledge. It wouldn't be the first time something like that has happened. At best, we might have some kind of fundamental breakthrough, perhaps in a direction unexpected at the beginning of the investigation.
RE: homeopathy - this is a good example of something that is, as you say, inexplicable yet works - and is something that people would love to 'debunk', as the efficacy of these medications does not fit the standard models of physics or chemistry. Yet there is an astounding amount of solid clinical use --- in Europe more than America, and in Germany especially. To those who think it's bunk because it 'hasn't worked for them, or someone they know': that's because either the medical practitioner didn't find the correct medicament, or - far more frequently - because the patient did not consistently follow the protocol. Of course, these things, like anything else, can only be used where there are reasonable grounds for them working.
> That's interesting. Something strange seems to be going on here. What can we learn from it? Is there somewhere where our knowledge of the universe is not as complete as it should be?
Well said. Isn't that what science is really all about?
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Meta-analysis is accepted as appropriate for analyzing some medical data; it's simply not appropriate for the type of experiments conducted by the PEAR people.
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
Homeopathy has never been shown to be superior to treatment with a placebo in a properly conducted trial, AFAIK. How can administering water to a patient, with not a single molecule of an active substance in it, possibly have an effect, even if it is shaken by a machine or a person at each dilution?
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
Post Edited (Leon) : 6/23/2009 7:19:32 PM GMT