Posts: 1,138
Threads: 5
Likes Received: 112 in 79 posts
Likes Given: 57
Joined: Nov 2018
Reputation:
9
11-23-2020, 08:02 PM
(This post was last modified: 11-23-2020, 10:58 PM by SteveII.)
Good Arguments (Certainty vs. Probability)
(11-20-2020, 11:54 PM)Peebothuhlu Wrote: (11-19-2020, 07:17 PM)SteveII Wrote: ... A ton of stuff....
Hi Steve!
So, you're in the camp of 'Fine tuned'. Okay, cool.
Here's a YouTube vid from someone actually learning about and getting ready to be a professional in the biological sciences.
The person he's reviewing is a respected scientist (James Tour) whose work in nanotech (I think?) is widely respected BUT who still holds to the 'Must have been made' stance, like Dembski and others.
Hope you find the review informative.
Cheers,
Not at work.
So, you respond to a logical-form discussion of the fine-tuning of the universe argument (definitely not biology) by providing a 1+ hour YouTube critique, by a biologist, of an Intelligent Design argument (a biological argument) made by a non-biologist? What point are you trying to make?
Posts: 119
Threads: 3
Likes Received: 193 in 75 posts
Likes Given: 41
Joined: Feb 2020
Reputation:
13
11-23-2020, 11:09 PM
Good Arguments (Certainty vs. Probability)
(11-23-2020, 07:10 PM)SteveII Wrote: (11-20-2020, 12:00 AM)Reltzik Wrote: (11-19-2020, 07:17 PM)SteveII Wrote: Your lottery example has problems. To be analogous to the beginning of the universe, the probability of cheating cannot be calculated; not simply difficult, but actually impossible. Someone already won the lottery. You cannot go back and assess the possible ways/opportunities people could have cheated and come up with a probability. That assessment would have to be based on a priori knowledge of the process that produced the result you are examining.
That was more an example by which we, looking at the illustration from the outside, would know that the reasoning was horribly flawed; the people inside the example needn't know how secure the systems were at all.
But okay, I'll play along. Let's say we have no information about how difficult the security measures are to beat. If the odds of a successful cheat of the system were much greater than those of a chance win, 1e-2 for example, then design would be the reasonable inference, and if they were much lower, let's reuse 1e-12 for the example, then chance would be the reasonable inference. But we don't actually know if the chance of a win by design is a lot higher than, a lot lower than, or fairly comparable to the odds of a win by chance. How can we compare the two probabilities if we don't have the second one, as you insist we can't? And if we don't know this probability, then why would someone arguing that the winner cheated just automatically assume that a cheat was the reasonable inference? There are two other options (similar probabilities, and probabilities vastly in favor of a fair win) on the table, and after this restriction of knowledge about the odds of beating security we have no way of knowing which of the three options is more likely. On what basis do we favor the one above the other two, other than preexisting bias? This step is missing.
I am not saying your reasoning is flawed. I am saying that you are focusing on why a particular person won (the odds are that someone will, otherwise no one would buy a ticket). The Teleological Argument is focusing on why anyone won, given the staggering odds that no one should have. It is vastly more probable that a life-prohibiting universe would exist. That's why I gave the better analogy of the ball drawing (I'll pick this up below).
It's the odds that this universe won. (For certain values of "winning".)
(11-23-2020, 07:10 PM)SteveII Wrote: Quote: (11-19-2020, 07:17 PM)SteveII Wrote: The problem seems to be that you have applied a counterargument to Intelligent Design in biology to the question of the initial conditions of the universe. The two are fundamentally different. The amount of information you need to assess what evolution can or cannot do is staggering, but you can get to work on it. The Teleological Argument addresses only a dozen or so constants that could have had other values but turned out to be just so. There is no process to examine.
The fine-tuning argument addresses those (or at least tries to). We could treat the fine-tuning argument as a variation on the teleological argument or an example of a larger class of teleological arguments, but just as fine-tuning was around for decades before specified complexity was thrown into the mix, the teleological argument was around for millennia before the constants you're talking about were identified. I guess you're conflating these two as well, and I'll try to keep it in mind going forward.
You fairly criticized the lottery example for assuming a priori knowledge, but the fine-tuning argument does the same and more. It isn't just making a priori assumptions about the odds that a designer would set the constants this way; it also makes bald assertions that the constants ending up this way by chance are unlikely.
I don't know which list of a dozen or so constants you're talking about, and there have been so many such lists produced (many of which are so overeager to list anything that they'll throw in items that clearly aren't required for intelligent life, or items which are essentially restatements or implications of other items on the list) that I won't even try to google around to find which one you're talking about. So I'll just pick the gravitational constant as an oft-cited example.
We can have an interesting argument about how finely tuned this constant actually needs to be for some form of intelligent life to arise, but for the sake of argument I'll assume it needs to fall in a narrow range of values. (And no, I won't define "narrow", because that's a huge mathematical can of worms and that ball is definitely not in my court.) What are the ACTUAL odds that a universe that came about by chance would get a gravitational constant in that narrow range of values?
We don't know this value. How would we even find it? We can't take a frequentist approach, run hundreds of experiments of creating a universe, and statistically estimate the odds of getting that constant. We can't take the classical approach, because about the only thing we can possibly know about the probability distribution is that it's not uniform. (A uniform distribution across the range of all real numbers is mathematically impossible.) That leaves Bayesian probability (slightly related to, but distinct from, the Bayesian calculations I was using earlier... yes, some mathematicians get multiple things named after them and, yes, it is annoying). Bayesian probability is basically just a subjective gut check of how likely we think something is, and it's less an unbiased look at things and more a way of quantifying someone's bias. It just can't do the job of proving objective facts, unless they're objective facts about how likely people think something to be. I wouldn't list it as an option at all, except (A) it's the only one left and (B) it seems to be what the people advancing this argument are using.
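(For anyone wondering about that parenthetical, here is a minimal sketch of the obstruction: a uniform density over the whole real line would have to be some constant c, and no constant integrates to 1 there.)

\[
\int_{-\infty}^{\infty} c \, dx =
\begin{cases}
0 & \text{if } c = 0,\\
\infty & \text{if } c > 0,
\end{cases}
\qquad \text{so no value of } c \text{ yields total probability } 1.
\]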
I am getting the feeling that you are not entirely familiar with the scientific basis for the denial of Chance in P2.
There are several things cited as scientific basis. I've encountered apologists who try to double-bill (citing as separate constants multiple items that all arise from the same source), who list things that are not constants, who identify items unique to us here on Earth as features of the whole universe, and who list things that aren't required for life. It's generally a handful of "maybe there's something here" buried in a truckload of bullshit, and the truckload of bullshit tells me that, collectively, the people arguing for "maybe there's something there" can't be trusted to both tell the difference and represent it honestly. And the truckload of bullshit getting in the way makes it difficult to give close examination to the few items that seem interesting on first blush.
Regardless of how they behave, it wasn't that I didn't know what some people cite as the basis for denying Chance, but rather that I didn't know what YOU were citing as the basis for the denial of chance.
(11-23-2020, 07:10 PM)SteveII Wrote: It is not simply a "narrow range" lending itself to lottery analogies. By any reasoning, these are probabilities in the 'incredulous range.' So as not to introduce any of my own bias, I will clip sections from the Wikipedia article on the Fine-Tuned Universe (https://en.wikipedia.org/wiki/Finetuned_universe)
Wikipedia Wrote:[from the intro] The characterization of the universe as finely tuned suggests that the occurrence of life in the Universe is very sensitive to the values of certain fundamental physical constants and that the observed values are, for some reason, improbable.[1] If the values of any of certain free parameters in contemporary physical theories had differed only slightly from those observed, the evolution of the Universe would have proceeded very differently and life as it is understood may not have been possible.[2][3][4][5]
[from Motivation] The premise of the finetuned universe assertion is that a small change in several of the physical constants would make the universe radically different. As Stephen Hawking has noted, "The laws of science, as we know them at present, contain many fundamental numbers, like the size of the electric charge of the electron and the ratio of the masses of the proton and the electron. ... The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life."[5]
The precise formulation of the idea is made difficult by the fact that physicists do not yet know how many independent physical constants there are. The current standard model of particle physics has 25 freely adjustable parameters and general relativity has one additional parameter, the cosmological constant, which is known to be nonzero, but profoundly small in value.
[from Examples]
Martin Rees formulates the finetuning of the universe in terms of the following six dimensionless physical constants.[2][15]
• N, the ratio of the electromagnetic force to the gravitational force between a pair of protons, is approximately 10^36. According to Rees, if it were significantly smaller, only a small and short-lived universe could exist.[15]
• Epsilon (ε), a measure of the nuclear efficiency of fusion from hydrogen to helium, is 0.007: when four nucleons fuse into helium, 0.007 (0.7%) of their mass is converted to energy. The value of ε is in part determined by the strength of the strong nuclear force.[16] If ε were 0.006, only hydrogen could exist, and complex chemistry would be impossible. According to Rees, if it were above 0.008, no hydrogen would exist, as all the hydrogen would have been fused shortly after the Big Bang. Other physicists disagree, calculating that substantial hydrogen remains as long as the strong force coupling constant increases by less than about 50%.[13][15]
• Omega (Ω), commonly known as the density parameter, is the relative importance of gravity and expansion energy in the universe. It is the ratio of the mass density of the universe to the "critical density" and is approximately 1. If gravity were too strong compared with dark energy and the initial metric expansion, the universe would have collapsed before life could have evolved. On the other side, if gravity were too weak, no stars would have formed.[15][17]
• Lambda (Λ), commonly known as the cosmological constant, describes the ratio of the density of dark energy to the critical energy density of the universe, given certain reasonable assumptions such as positing that dark energy density is a constant. In terms of Planck units, and as a natural dimensionless value, the cosmological constant, Λ, is on the order of 10^−122.[18] This is so small that it has no significant effect on cosmic structures that are smaller than a billion light-years across. If the cosmological constant were not extremely small, stars and other astronomical structures would not be able to form.[15]
• Q, the ratio of the gravitational energy required to pull a large galaxy apart to the energy equivalent of its mass, is around 10^−5. If it is too small, no stars can form. If it is too large, no stars can survive because the universe is too violent, according to Rees.[15]
• D, the number of spatial dimensions in spacetime, is 3. Rees claims that life could not exist if there were 2 or 4 dimensions of spacetime nor if any other than 1 time dimension existed in spacetime.[15] However, contends Rees, this does not preclude the existence of ten-dimensional strings.[2]
Carbon and oxygen
An older example is the Hoyle state, the third-lowest energy state of the carbon-12 nucleus, with an energy of 7.656 MeV above the ground level.[19]:125–127 According to one calculation, if the state's energy level were lower than 7.3 or greater than 7.9 MeV, insufficient carbon would exist to support life. Furthermore, to explain the universe's abundance of carbon, the Hoyle state must be further tuned to a value between 7.596 and 7.716 MeV. A similar calculation, focusing on the underlying fundamental constants that give rise to various energy levels, concludes that the strong force must be tuned to a precision of at least 0.5%, and the electromagnetic force to a precision of at least 4%, to prevent either carbon production or oxygen production from dropping significantly.[20]
Dark Energy
A slightly larger quantity of dark energy, or a slightly larger value of the cosmological constant would have caused space to expand rapidly enough that galaxies would not form.[21]
Now that you are actually citing values, I can address one of them, and maybe that will get the point across better than the example with the gravitational constant. I'll pick on the measure of nuclear efficiency, because that's the first one on the list with numbers for the range of values. I'll be charitable and take the narrower range of (0.006, 0.008), even if the scientific consensus would suggest a broader (0.006, 0.0105). I suspect your same objections to chance would be there in either case, and I know my same objections to your argument are there in either case.
Yes, I'm calling this a narrow range, and no, I am still not giving a precise definition of what is and isn't a narrow range. You call this an incredulous range. But how incredible is this, really? In the red-ball analogy, this is one of the red balls. But how low were the odds that we'd get a red ball in this particular case? What were the ODDS that we'd get some value inside this narrow interval, versus some value outside of it?
More to the point, what methodology are you using to determine what those odds were? By what means do we know the odds of landing on this particular range were, say, less than 50/50? My position is that without additional data there exists no sound mathematical or statistical methodology for identifying the probability that we would arrive at this range by chance, so just cite how it was done and (if it's not just more apologist bullshit) we can move on from this point. You clearly think the odds of hitting this range are far less than 50/50, so low that you label it incredulous. How did you conclude that?
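To make the point concrete, here's a minimal sketch (the candidate prior distributions below are arbitrary illustrations I made up, not anything drawn from physics or from your sources): the probability of epsilon landing in (0.006, 0.008) "by chance" is whatever the prior you assume says it is, and equally arbitrary priors give wildly different answers.

Code:
# Minimal sketch: P(0.006 < epsilon < 0.008) under several made-up priors.
# Nothing here is physics; the priors are arbitrary illustrations.
from scipy import stats

low, high = 0.006, 0.008

priors = {
    "uniform on (0, 0.01)":  stats.uniform(loc=0.0, scale=0.01),
    "uniform on (0, 1)":     stats.uniform(loc=0.0, scale=1.0),
    "normal(0.007, 0.0005)": stats.norm(loc=0.007, scale=0.0005),
    "exponential(mean 0.5)": stats.expon(scale=0.5),
}

for name, dist in priors.items():
    p = dist.cdf(high) - dist.cdf(low)   # probability mass inside the interval
    print(f"{name:22s} -> {p:.4f}")

# Prints roughly 0.2, 0.002, 0.95, and 0.004 respectively: same interval,
# wildly different "odds", because nothing pins down which prior is right.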
(11-23-2020, 07:10 PM)SteveII Wrote: Regarding Bayesian probability, I agree that that is applicable here. You are comparing probabilities, not trying to solve for them. Basically P(Order|Design) > P(Order|not-Design). The argument shows that P(Order|not-Design) is so low as to be approaching zero. P(Order|Design) is bolstered by the fact that life-permitting checks the box for specified complexity.
Putting a pin in our argument about specified complexity for a moment to say, no, that's not enough. Just saying that P(Order|Design) would be bolstered by it is not enough to say that P(Order|not-Design) would be the lower of the two, even if we grant the specified complexity argument.
But more to the point, those are the wrong probabilities to be looking at. The goal of the argument is to arrive at some high value or range for P(Design|Order) or P(Not-design|Order). Moving to these from P(Order|Design) or P(Order|Not-design) requires knowledge of both P(Order) and P(Design) under Bayes' Theorem. Or can you cite another way of reversing the order of the conditioning?
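For reference, the reversal I mean is just Bayes' Theorem written in the notation we've been using; the two unconditioned terms it requires are exactly the quantities the argument never supplies:

\[
P(\mathrm{Design} \mid \mathrm{Order}) = \frac{P(\mathrm{Order} \mid \mathrm{Design})\, P(\mathrm{Design})}{P(\mathrm{Order})},
\qquad
P(\mathrm{Order}) = P(\mathrm{Order} \mid \mathrm{Design})\, P(\mathrm{Design}) + P(\mathrm{Order} \mid \mathrm{not\text{-}Design})\, \bigl(1 - P(\mathrm{Design})\bigr).
\]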
(11-23-2020, 07:10 PM)SteveII Wrote: Quote:So take my lotto example, and in addition to removing all knowledge about how likely it is for an attempt to cheat to succeed, also remove all knowledge of how likely it is for a given ticket to win by chance. All we know is that at least one person won. On what basis do we conclude that this winner cheated?
(11-19-2020, 07:17 PM)SteveII Wrote: A more analogous counterexample will illustrate: There is a drawing of identical ping-pong balls by a machine that works flawlessly to randomize them. To keep the numbers similar, let's say there are 1,000,000 identical balls, except there are 999,999 white balls and one red one. The odds of a red ball coming down the chute are one in a million. The first drawing, a red one comes down. Improbable, but reasonable. You do the drawing 11 more times and red comes down every time. No other information can be known, only the probability of chance: 1:1000000000000 (and 60 more zeros). Is it reasonable to infer something is fixed?
I'd add to that the possibility of the machine being broken and a few other options, but that it's somehow not behaving according to raw chance? Yeah, that's a reasonable inference. Now if we don't know a priori how many balls of what colors are in the machine, and we get 12 red balls, why would we assume that the odds of the red ball popping out are 1e-7 as opposed to 1e-12 or 1e-1 or 9.999997e-7?
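A quick check of the arithmetic, using the one-in-a-million figure from the quoted example plus a couple of hypothetical alternatives, shows how completely the inference rides on that unknown single-draw probability:

\[
P(\text{12 reds in a row}) = p^{12}: \qquad (10^{-6})^{12} = 10^{-72}, \qquad (10^{-1})^{12} = 10^{-12}, \qquad (0.9)^{12} \approx 0.28.
\]

Only the first of those matches the "1 followed by 72 zeros" figure; the other two are made-up alternatives for a machine whose contents we haven't seen.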
Are we allowing a priori knowledge here, or not? Or is this another of those apologist games where you demand that I play by restrictive rules and then you just ignore the rules all you want? Because that sort of bottomless dearth of even the pretense of integrity really pisses me off.
To continue from my first comment above, it is not why someone won, it is why anyone won. In a lottery you always have the "someone has to win."
Again, I'm coming at this from the position of the Universe won (in the sense that "winning" means that it turned out to be such a universe as could permit life).
(11-23-2020, 07:10 PM)SteveII Wrote: Quote:Complexity, as Dembski defined it, wasn't really about the length of the sequence, but the improbability of getting that particular sequence. The two are linked, of course, but if we imagine that the bag of scrabble tiles contained a million E tiles and one Q tile and nothing else, getting a thousand E tiles in a row would not be complex in the same way as drawing the much shorter quote from the Declaration would be from a bag with letters in the proportions normal to a scrabble game. So since we're now talking about fine-tuning constants, how are you determining that the letters in the bag were in proportions that are default for a scrabble game? Or do you think you just get to assume that a priori on no basis whatsoever?
There is no reason to think there was a control on the constants (a tile bag). There is no reason to think that every possible value was equally probable.
That's exactly the point I was making. There is no reason to think that every possible outcome was equally probable. (Though, to be fair to the metaphor, any possible weighting of probabilities could be mapped to some proportions of tiles in a tile bag that could be within some arbitrary tolerance of matching that weighting.) Similarly, there's no reason to think that every possible range for nuclear fusion efficiency is equally probable. And yet (the Creationist version of) specified complexity hinges on identifying this particular outcome as being low-probability (that's the complexity part), implying some knowledge of the probabilities involved.
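To put rough numbers on the tile-bag point (a minimal sketch; the bag contents and the 30-letter phrase length are my own illustrative assumptions, not anything from Dembski): a very long draw from a lopsided bag is nearly certain, while a short specific phrase from an ordinary bag is astronomically unlikely, so it's the assumed proportions, not the length, doing the work.

Code:
# Sketch of "complexity is about improbability, not length".
# The bag contents and the 30-letter target phrase are illustrative assumptions.

# Bag 1: one million E tiles and a single Q tile (drawing with replacement).
p_e_lopsided = 1_000_000 / 1_000_001
p_thousand_es = p_e_lopsided ** 1000
print(p_thousand_es)        # ~0.999 -- a very long outcome, but not improbable

# Bag 2: ordinary Scrabble-like proportions.  Even treating all 26 letters as
# equally common, one specific 30-letter phrase is astronomically unlikely.
p_specific_phrase = (1 / 26) ** 30
print(p_specific_phrase)    # ~3e-43 -- a much shorter outcome, wildly improbable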
So I'll put the key questions to you again: What was the incredulously low probability that, left to chance, the efficiency of nuclear fusion would have fallen inside (0.006, 0.008), and by what means did you arrive at that probability?
And you know what, scratch that. I'll be far more generous. You said a dozen or so constants. If there were a dozen constants, each with odds of hitting its necessary range of less than 95%, and they were independent, then the odds of simultaneously hitting all of them would be less than 55%. I don't know how low a probability is required for you to affix the term "incredulous" to it, but I imagine it has to be far lower than 55%. So 95% seems like a very generous spot to place the bar. Therefore, I'll accept any valid statistical methodology demonstrating the probability of nuclear fusion efficiency falling in the (also-generous) interval of (0.006, 0.008) by chance to be less than 95%.
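The arithmetic behind that 55% figure, for anyone checking: with a dozen independent constants, each hit with probability at most 0.95,

\[
0.95^{12} \approx 0.54 < 0.55.
\]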
Again, my position is that this probability is incalculable with just the data we have. Not incalculably high or incalculably low, but something that can't be calculated at all. Someone with Dembski's mathematical background would damn well have known that, which is why I view his contributions to the field of apologetics (and that field's embrace of them, rather than tossing them out) as another piece of evidence that the field of apologetics is bullshit, that apologetics is about getting credulous individuals to buy a conclusion by deceit rather than on any reasonable basis, and that apologists should not be trusted. Show me that this probability CAN be shown to be at most 95%, and I'll be forced to reconsider this piece of evidence.
"To surrender to ignorance and call it God has always been premature, and it remains premature today."  Isaac Asimov
