Welcome to Atheist Discussion, a new community created by former members of The Thinking Atheist forum.


Good Arguments (Certainty vs. Probability)
(11-11-2020, 10:28 PM)Bucky Ball Wrote: ...In order for a "design" argument to work, you FIRST must establish that there is a possible designer, otherwise it's not a valid inference, just a made-up pile of rubbish that comes from religious presupposition, and is no more valid than saying a 4th possibility is the Pink Sparkly Unicorn.

Ain't it funny that every theist throughout history has failed dismally each and every time they've been
asked—quite reasonably—to provide even one iota of evidence supporting their notion that supernatural
entities could exist in real life?

And also that none of them have ever attempted to prove to me unequivocally that leprechauns do not exist.

—I'm betting Steve won't have the cojones to take on that challenge either LOL.
I'm a creationist; I believe that man created God.

Good Arguments (Certainty vs. Probability)
(11-09-2020, 09:38 PM)SteveII Wrote:
(11-09-2020, 07:02 PM)Reltzik Wrote:
(11-09-2020, 05:55 PM)SteveII Wrote: You missed my point and you are not addressing the logic of the argument. The probability of Design on its own is irrelevant. At question is the probability of the other two alternatives: Chance and Necessity. When these fail to give an account for what we see, Design is INFERRED because those are the only three possible alternatives. It is the same logic if you happen upon a perfect 1m polished glass sphere in the woods. You do not consider the actual probability of such an object (presence of silica sand, sodium oxide, etc., sufficient heat, geological history, polishing agent, some theory on what could have governed the size, etc.). You consider the probability of Chance. When that does not satisfy, you walk away wondering who put it there.

The way to defeat the argument is to make Chance or Necessity more probable or Design impossible.

Bucky could never make the argument he claims. He thinks insults and links are his best route to proving a point he rarely understands.

Yes, that is EXACTLY the logic of the argument: inferring that it must be design on the grounds that the other two are unlikely, and making that inference WITHOUT actually establishing ANYTHING about how probable design is in comparison to the other two, because when have Christian apologists ever possessed a shred of the integrity required to hold their favorite answer to the same standard as the alternatives?  I am addressing the logic of the argument by showing it's missing a piece needed to be cogent, and you're saying I'm not addressing the logic of the argument on the grounds that I'm not skipping the same step that it does.  WTF?

Does the over-the-top rhetoric do something for you?

For reference:
P1. The fine-tuning of the universe is due to either physical necessity, chance, or design.
P2. It is not due to physical necessity or chance.
C. Therefore, it is due to design.

P2 does not conclude "unlikely" as your argument needs. It concludes that fine-tuning is not plausibly due to chance or necessity. If two of three options are implausible, the third becomes more plausible as a result of the failures of the other two. But you are not arguing the probabilities of chance and the cosmological constants. You are arguing:

P1'. The fine-tuning of the universe is due to either Chance or Design.
P2'. Design is not probable
C'. Therefore Chance.

This is my third attempt at writing a civil response to this, one in which I maintain an open mind and allow that your gross mischaracterization of my position should not be ascribed to malice, but instead to something other than malice.  (Even though I think my charity is a sign that I myself am something other than malicious.)

I am not arguing for what you are describing me as arguing for.  Not in the slightest.

What you describe as my P2' is not a position I am advancing, and not a position that I hold, and not a position that I am basing an argument on.  If anything, I do not believe that there exist enough data to even estimate the probability of design.  (I do maintain that if we were to restrict the discussion to a choice between a perfect designer or no designer, and propose a specific goal that the universe was being designed for, we could then gauge likelihoods.  For example, a perfect designer wishing to create a habitat for humans is extremely unlikely to have created a universe like ours, where nearly the entirety of its span results in a horrific gasping death for any humans who find themselves in it.  But that's an aside, since it deals with a pair of assumptions not present in the Teleological argument.)

My conclusion is not, as you depict, that the universe is the result of chance.  Rather, my conclusion is that the Teleological argument is unsupported in its second premise.  In terms of a deductive argument, it is valid but not sound.  I describe this second premise as the core of the argument, and allude to some apologetic attempts to establish it.  I dismiss those attempts as vastly insufficient on the grounds that they focus entirely on trying to calculate a low probability for chance, without giving a flying fuck about how probable or improbable design is, and in apparent total ignorance of how conditional probability works.  (Which, to be fair, a lot of people don't know, and that's not a bad thing in a moral sense.  It's so useful to life that they probably should know it, but that's a pragmatic should rather than a moral should.  The moral failing is their unrivaled arrogance in proclaiming themselves authoritative experts on the origins of the entire universe on a foundation of that gaping chasm of ignorance.)

So, a fair, accurate, non-malicious depiction of my position regarding the Teleological argument would have read as:

P1':  If the premises of an argument are not adequately established, then the conclusion of the argument is also not established.  (At least, not by this particular argument.)
P2':  P2 of the Teleological argument is not adequately established.  (This is where I got into conditional probabilities as a way of addressing a few of the more common ways apologists feign to establish P2.)
L':  The Teleological Argument does not establish the conclusion of a designer's existence.

Note that I marked that as L' rather than C'.  That is because it is a lemma in a larger argument rather than the overall conclusion.  I then characterized the brazenness of the failures to establish P2 as something which could not be adequately attributed to stupidity, at least in the case of someone like WLC.  I marked this as part of a larger pattern within the field of apologetics, which I supported with a few other examples, and went on to express a very strong opinion that this was likely the result of deceptive conduct and deep character flaws on the part of the apologists.  But you wanted to focus on the Teleological Argument part, so let's file the rest of that away for another time.

(11-09-2020, 09:38 PM)SteveII Wrote: Your error is that design is a property that is always inferred from two things: 1) the presence of specified complexity (mentioned in P1') --not just complexity-- and 2) eliminating other possible causes.

No.  Design is not always inferred from specified complexity and the elimination of other possible causes.  Specified complexity is often used as an attempt to support P2, but it is far from the only basis for it.  The Teleological Argument is ancient, and apologists were concluding design from it centuries before Creationists ever attempted a justification through specified complexity.  (I'll leave aside quibbles about whether the Intelligent Design case for a designer of biological systems counts as the Teleological Argument.  They're not exactly the same thing, but hey, close enough for this discussion.)

Or do you mean instead that specified complexity will always lead one to infer design?  I'm not sure which way you meant "always" in that quote.

Either way, you did NOT mention specified complexity in P1'.


Also, by your own words, you actually have to ELIMINATE chance as a possible cause and combine that, independently, with an observation of specified complexity in order to conclude design, rather than using specified complexity itself as a basis to eliminate chance.  Did you mean to phrase this a different way?

(11-09-2020, 09:38 PM)SteveII Wrote: Your (C') does not follow from the premises because you failed to account for design being an inferred property. How do you know that design is not probable without analyzing chance? For P2' to be true, C' would already be assumed. You are question-begging.

I would be, if your something-other-than-malicious depiction of my argument were actually my argument.

(11-09-2020, 09:38 PM)SteveII Wrote: Another analogy to show how the property of design is inferred: A large scrabble tile bag with large equal quantities of letters. Your friend said he drew in order the following:

WEHOLDTHESETRUTHSTOBESELFEVIDENTTHATALLMENARECREATEDEQUAL

Are we to believe him? No, we would not. Why? Because of a design inference due to the specified complexity and the very low probability of chance.  How about if he said he drew these?

XERTYHJKIOSYUIODPHOPOQIJNIBQLDKIJXMFLEKDNEVQKBZVSNWYKERLA

It is equally improbable to have drawn this specific string of letters. But, we would believe him. Why? There is no specified complexity and therefore no inference to design.

Okay, so we're dealing with Dembski's version of specified complexity, rather than the original notion.  Thought as much.

What strikes me about this example is that it is far from a counter of my contention that these inferences are based on other probabilities and likelihoods.  The only reason we would doubt the first string of letters was random is that it seems like something that an intelligent agent familiar with the Declaration of Independence might gravitate towards (which, yes, is your point).  If I had absolutely no knowledge of the Declaration or of how phonemes in English worked, I might well view this string of letters as unspecified complexity.  Going back to Bayes, that is P(Order | Design), one of the missing elements I said might make P2 work.  You're not exactly persuading me that this criticism was incorrect by providing a counterexample that isn't a counterexample.
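(For anyone who wants the conditional-probability point made concrete, here's a minimal sketch in Python. Every number in it is an invented placeholder, not a claim about the actual universe; the only point is structural: a posterior for Design depends on P(Order | Design) just as much as it does on P(Order | Chance).)

```python
# Minimal Bayes sketch over two exhaustive hypotheses, Chance and Design.
# All numbers below are invented placeholders for illustration only.

def posterior_design(p_order_given_chance, p_order_given_design, prior_design):
    """P(design | order) via Bayes' theorem, assuming chance and design
    are the only two hypotheses (so their priors sum to 1)."""
    prior_chance = 1.0 - prior_design
    numerator = p_order_given_design * prior_design
    evidence = numerator + p_order_given_chance * prior_chance
    return numerator / evidence

# A tiny P(order | chance) alone settles nothing: if P(order | design)
# is just as tiny, the posterior stays at the prior.
print(posterior_design(1e-10, 1e-10, 0.5))  # -> 0.5: both hypotheses equally bad
print(posterior_design(1e-10, 0.5, 0.5))    # -> ~1.0: design wins only given a high likelihood
```

The sketch shows why "chance is improbable" cannot by itself carry the inference: the same low number yields opposite conclusions depending on the likelihood assigned to design.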

(11-09-2020, 09:38 PM)SteveII Wrote:
Quote:And the glass sphere metaphor (or watch, or message in sand, both common variations of that example) is a horrifically faulty analogy.  We have proofs of concept for glass manufacturing, we know that glass products are artificially produced all the time, we see them at stores everywhere with brand names and trademarks of the manufacturers printed on them, we can take classes to learn how to make them ourselves, and I'm sure if I go online it would just take me a few minutes to find just such a product for sale accompanied by a blurb about who they are and maybe even some pictures of them making the product.  I would draw the conclusion that people likely produced the glass sphere because, yes, I know it's not likely to happen just by chance, but ALSO because I actually have preexisting knowledge that people manufacture glass products.  This knowledge that manufactured glass spheres are more common than naturally-occurring glass spheres is an essential element for making this a strong inference, and that knowledge is exactly what we do not possess about the universe.

(Google quickly found me 1m glass spheres for sale, marketed as decorative objects for hotels.)

To emphasize the discrepancy between examining both possibilities and leaving one unexamined, suppose astronomers discover a sphere of glass 10,000,000 meters across floating in interstellar space somewhere.  Would we infer that this was designed?  Well, that's planet-sized.  We don't have experience with manufacturing on that scale, or with putting such objects into interstellar space.  So far we've put exactly TWO objects into interstellar space, and both are considerably smaller and less massive.  That's beyond our capabilities.  Why would we go to the effort of making it and putting it there?  It doesn't seem to serve any purpose larger than itself.  We certainly didn't make it.  Aliens, perhaps?  Well, maybe, but we've been looking into the void for a while now and have yet to see any signs of an interstellar civilization, and most of the arguments for why we might not have seen one don't mesh with them possessing the motivation, manufacturing capacity, and reach to produce glass planets in interstellar space.  So could it be the result of chance?  Well, the present scientific understanding of how planets form involves them coalescing due to gravity from dust and gas, turning molten from the heat of the initial collisions, and settling into spherical shapes under their own weight.  Glass is made of silicon and oxygen, both in the top ten most common elements in the universe, so that has a certain plausibility to it.  So for that sphere, I would infer chance natural processes.  Now maybe you could argue the other way, pointing out some explanation for unseen aliens that would involve them producing glass planets, or pointing out some implausibilities in my idea of how a glass planet might have formed, like asking why it isn't an oblate spheroid instead of a perfect sphere.  But that just proves my larger point.
We'd be weighing the likelihoods and unlikelihoods of BOTH possibilities against each other, and doing so on the basis of a preexisting understanding of what each would entail.  Examining the probability of each is required.  We don't just get to single one out and default to it, unexamined, just because the others seem unlikely.

(And yes, I'm conflating "chance" and "physical necessity", since necessity is just chance with a probability of 1 and it's a bit of a mouthful listing both of them every single time.)

So if you want a found-in-the-woods analogy that actually is analogous to the universe, please pick an item for which we have zero established proofs-of-concept of being designed, no proposed methods by which a design might be implemented, and no apparent purpose for designing and producing it in the first place.  You know, like the universe.

Your counter-analogy with a glass planet is useful. Such a planet is plausibly due to chance BECAUSE of the undercutting defeaters you articulated against the idea of design. That is why I asked you earlier if you had reasons to think Design was not possible. But as you increased the specified complexity of your planet-sized glass sphere, chance would become less plausible and design more likely (e.g., unknown groups of repeating letters scribed around the equator).

In that case... and assuming that I could actually establish that those were LETTERS... I'd probably conclude that the glass sphere was natural and later had the letters added by something at least as intelligent as a truck stop graffiti artist.

And in what way does that metaphor apply to the universe as a whole?  It's complex in that it has a lot of details in it, but how is it specified?
"To surrender to ignorance and call it God has always been premature, and it remains premature today." - Isaac Asimov

Good Arguments (Certainty vs. Probability)
Complexity in and of itself is not indicative of design.  The wheel is one of the least complex designs, yet we can infer design by its regularity.  But a wheel is so simple there probably are examples of natural wheels (and by wheel I don't mean merely circular, but something used as a wheel.  Sunflowers are not wheels).

In an earlier post I defined the proof of design as regularity in reproduction, but it occurred to me there's an even more powerful indication of design:  absence of superfluous constituents.  A prototype will start out having components A, B and C, but by the time the device is put into production A and B are gone, C has been transmogrified into D19, and E, F and G have been incorporated.  Little to none of the prototype's elements remain that were found superfluous to the final design.

Not so in natural evolution.  Evolution is not a design process.  When we examine the most deeply complex specimens of biology we find superfluity accounting for a significant portion of the whole.  Evolution does not pare its mechanisms down to essentials because evolution is not a consciously directed process.

So, we find a pocketwatch on the beach.  We know it's a designed item NOT because it's complicated.  Instead, we know it's designed by the regularity and uniformity of its constituents AND that it embodies virtually NO superfluous elements.

Next to the watch is a sea slug.  We know its origins are from a fundamentally random succession of change because as we dissect it we find much that is superfluous to its existence.

Good Arguments (Certainty vs. Probability)
(11-17-2020, 04:23 AM)airportkid Wrote: Complexity in and of itself is not indicative of design.  The wheel is one of the least complex designs, yet we can infer design by its regularity.  But a wheel is so simple there probably are examples of natural wheels (and by wheel I don't mean merely circular, but something used as a wheel.  Sunflowers are not wheels).

In an earlier post I defined the proof of design as regularity in reproduction, but it occurred to me there's an even more powerful indication of design:  absence of superfluous constituents.  A prototype will start out having components A, B and C, but by the time the device is put into production A and B are gone, C has been transmogrified into D19, and E, F and G have been incorporated.  Little to none of the prototype's elements remain that were found superfluous to the final design.

Not so in natural evolution.  Evolution is not a design process.  When we examine the most deeply complex specimens of biology we find superfluity accounting for a significant portion of the whole.  Evolution does not pare its mechanisms down to essentials because evolution is not a consciously directed process.

So, we find a pocketwatch on the beach.  We know it's a designed item NOT because it's complicated.  Instead, we know it's designed by the regularity and uniformity of its constituents AND that it embodies virtually NO superfluous elements.

Next to the watch is a sea slug.  We know its origins are from a fundamentally random succession of change because as we dissect it we find much that is superfluous to its existence.

Two additional weaknesses in the design argument:

1. No one is ever able to define or establish what exactly "complexity" is. When you ask them to say what is "too complex" to occur naturally and thus needs a designer, and what is simple enough to need no designer, they cannot say.

2. There is nothing in the argument that leads one to a god. Even if one grants that an object or system needs a designer, all that leads to is a proximate cause, or "nearest cause" ... there is nothing in the argument that leads to "ultimate cause". The cause of any observed complex object could be any number of causes, including alien cultures.

Good Arguments (Certainty vs. Probability)
(11-16-2020, 11:59 PM)Reltzik Wrote:
(11-09-2020, 09:38 PM)SteveII Wrote: For reference:
P1. The fine-tuning of the universe is due to either physical necessity, chance, or design.
P2. It is not due to physical necessity or chance.
C. Therefore, it is due to design.

So, a fair, accurate, non-malicious depiction of my position regarding the Teleological argument would have read as:

P1':  If the premises of an argument are not adequately established, then the conclusion of the argument is also not established.  (At least, not by this particular argument.)
P2':  P2 of the Teleological argument is not adequately established.  (This is where I got into conditional probabilities as a way of addressing a few of the more common ways apologists feign to establish P2.)
L':  The Teleological Argument does not establish the conclusion of a designer's existence.

Note that I marked that as L' rather than C'.  That is because it is a lemma in a larger argument rather than the overall conclusion.  I then characterized the brazenness of the failures to establish P2 as something which could not be adequately attributed to stupidity, at least in the case of someone like WLC.  I marked this as part of a larger pattern within the field of apologetics, which I supported with a few other examples, and went on to express a very strong opinion that this was likely the result of deceptive conduct and deep character flaws on the part of the apologists.  But you wanted to focus on the Teleological Argument part, so let's file the rest of that away for another time.

I'm not interested in your opinions on apologists. I am interested in examining your reasoning.

You have listed above the typical response to the argument--denying P2--but that is not what you have been arguing. You have repeatedly said that the proponent of the argument has to engage with whether design is probable. But your argument above entirely depends on undercutting P2 -- a premise that does not even mention design. So how do you make the move from the above to

(11-09-2020, 07:02 PM)Reltzik Wrote: Yes, that is EXACTLY the logic of the argument: inferring that it must be design on the grounds that the other two are unlikely, and making that inference WITHOUT actually establishing ANYTHING about how probable design is in comparison to the other two, because when have Christian apologists ever possessed a shred of the integrity required to hold their favorite answer to the same standard as the alternatives?  I am addressing the logic of the argument by showing it's missing a piece needed to be cogent, and you're saying I'm not addressing the logic of the argument on the grounds that I'm not skipping the same step that it does.  WTF?

You are "not addressing the logic of the argument." It seems if you actually inserted the 'skipped step', it would be a question-begging argument. Inferring something from valid premises is the way an argument works--especially when something like Design is a properly inferred property. For your 'critique' to be successful, you would have to show that design is not properly inferred through reasoning. You don't show it, you asserted it.

Quote:
(11-09-2020, 09:38 PM)SteveII Wrote: Your error is that design is a property that is always inferred from two things: 1) the presence of specified complexity (mentioned in P1') --not just complexity-- and 2) eliminating other possible causes.

No.  Design is not always inferred from specified complexity and the elimination of other possible causes.  Specified complexity is often used as an attempt to support P2, but it is far from the only basis for it.  The Teleological Argument is ancient, and apologists were concluding design from it centuries before Creationists ever attempted a justification through specified complexity.  (I'll leave aside quibbles about whether the Intelligent Design case for a designer of biological systems counts as the Teleological Argument.  They're not exactly the same thing, but hey, close enough for this discussion.)

Or do you mean instead that specified complexity will always lead one to infer design?  I'm not sure which way you meant "always" in that quote.

Either way, you did NOT mention specified complexity in P1'.

"The fine-tuning of the universe" is a clear reference to specified complexity in P1. It is a fact that the difference between a universe that holds together and one that does not depends on the narrow possible values of a remarkable number of variables. Specified complexity is not used to support P2. If one is to object to the argument based on the universe is not finely tuned, you would be objecting to P1.

You said "Design is not always inferred from specified complexity and the elimination of other possible causes". Well those two components are both necessary and sufficient for a design inference. You have argued other ways to infer design but they are neither necessary nor sufficient.

Quote:Also, by your own words, you actually have to ELIMINATE chance as a possible cause and combine that, independently, with an observation of specified complexity in order to conclude design, rather than using specified complexity itself as a basis to eliminate chance.  Did you mean to phrase this a different way?

I'll amend to "eliminate as a reasonable explanation."

Quote:
(11-09-2020, 09:38 PM)SteveII Wrote: Another analogy to show how the property of design is inferred: A large scrabble tile bag with large equal quantities of letters. Your friend said he drew in order the following:

WEHOLDTHESETRUTHSTOBESELFEVIDENTTHATALLMENARECREATEDEQUAL

Are we to believe him? No, we would not. Why? Because of a design inference due to the specified complexity and the very low probability of chance.  How about if he said he drew these?

XERTYHJKIOSYUIODPHOPOQIJNIBQLDKIJXMFLEKDNEVQKBZVSNWYKERLA

It is equally improbable to have drawn this specific string of letters. But, we would believe him. Why? There is no specified complexity and therefore no inference to design.

Okay, so we're dealing with Dembski's version of specified complexity, rather than the original notion.  Thought as much.

What strikes me about this example is that it is far from a counter of my contention that these inferences are based on other probabilities and likelihoods.  The only reason we would doubt the first string of letters was random is that it seems like something that an intelligent agent familiar with the Declaration of Independence might gravitate towards (which, yes, is your point).  If I had absolutely no knowledge of the Declaration or of how phonemes in English worked, I might well view this string of letters as unspecified complexity.  Going back to Bayes, that is P(Order | Design), one of the missing elements I said might make P2 work.  You're not exactly persuading me that this criticism was incorrect by providing a counterexample that isn't a counterexample.

The example was to illustrate how a justified inference to design works. Design inference is by definition a reasoning exercise. In this example, the conclusion is quickly arrived at--not chance, therefore design. The reasoning was not the probability of design. It was the improbability of chance. You can see that if you shorten the example. WEHOLD is very plausibly chance. There is no calculation of the probability of design.

You have touched on an important point. A lack of knowledge (of English in this case) might lead you to believe something is chance when it is not--universally undercutting the chance hypothesis--because perhaps we just don't have enough information. But a lack of knowledge does not easily lead someone to infer design, because identifying the specified complexity as part of the reasoning process requires a higher level of knowledge of what you are looking at. For example, the more we understand the laws of physics, the more complex we discover them to be. More knowledge has actually strengthened the argument for design by undercutting the chance hypothesis.
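(The one piece of arithmetic both sides accept here is easy to check. A minimal Python sketch, with the tile bag approximated as draws with replacement from 26 equally likely letters -- an assumption made purely for illustration, since a real bag is finite and drawn without replacement:)

```python
# Both posters agree on this arithmetic: any SPECIFIC string of tiles is
# equally improbable under uniform random draws. The approximation here
# treats each draw as independent and uniform over 26 letters.
meaningful = "WEHOLDTHESETRUTHSTOBESELFEVIDENTTHATALLMENARECREATEDEQUAL"
gibberish  = "XERTYHJKIOSYUIODPHOPOQIJNIBQLDKIJXMFLEKDNEVQKBZVSNWYKERLA"

def p_specific_string(s):
    """Probability of drawing exactly this string, letter by letter."""
    return (1 / 26) ** len(s)

# Identical lengths, hence identical (astronomically small) probabilities.
assert p_specific_string(meaningful) == p_specific_string(gibberish)
print(f"P(either specific string) = {p_specific_string(meaningful):.3e}")
```

The number itself is identical for both strings, which is exactly why the dispute above is about the specification, not about this probability.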

Good Arguments (Certainty vs. Probability)
(11-17-2020, 04:23 AM)airportkid Wrote: Complexity in and of itself is not indicative of design.  The wheel is one of the least complex designs, yet we can infer design by its regularity.  But a wheel is so simple there probably are examples of natural wheels (and by wheel I don't mean merely circular but is used as a wheel.  Sunflowers are not wheels).

In an earlier post I defined the proof of design as regularity in reproduction, but it occurred to me there's an even more powerful indication of design:  absence of superfluous constituents.  A prototype will start out having components A, B and C, but by the time the device is put into production A and B are gone, C has been transmogrified into D19, and E, F and G have been incorporated.  Little to none of the prototype's elements remain that were found superfluous to the final design.

Not so in natural evolution.  Evolution is not a design process.  When we examine the most deeply complex specimens of biology we find superfluity accounting for a signficant makeup of the whole.  Evolution does not pare its mechanisms down to essentials because evolution is not a consciously directed process.

So, we find a pocketwatch on the beach.  We know it's a designed item NOT because it's complicated.  Instead, we know it's designed by the regularity and uniformity of its constituents AND that it embodies virtually NO superfluous elements.

Next to the watch is a sea slug.  We know its origins are from a fundamentally random succession of change because as we dissect it we find much that is superfluous to its existence.

Regarding 'regularity' for design inference. Regularity is neither necessary nor sufficient to infer design. If it were a requirement (necessary), we would never be able to infer design on the first encounter of anything or any one-of-a-kind items. If it were sufficient, then the moon rising on schedule would be a design inference. It will always come down to: specified complexity coupled with eliminating chance or necessity as reasonable explanations. At most, regularity is a further confirmation of design once you have cleared the initial hurdle.

'Superfluous' is also an inferred property that requires knowledge. Take "junk" (non-coding) DNA. It was thought that 99% of our DNA was useless. That has changed substantially, and we find purposes for more of it each year. Obviously, a final assessment of 'superfluous' is contingent on a claim of complete knowledge of something. Since that is actually impossible, it is hardly a good basis on which to defeat an otherwise strong design argument.

An example: if I discover what looks like a machine with many parts on the back side of the moon (where I know no human has ever been), and I have no idea of its purpose, your principle of regularity does not apply. I have no idea which parts, if any, are superfluous. Is it reasonable to infer design?

Good Arguments (Certainty vs. Probability)
(11-18-2020, 03:33 PM)SteveII Wrote: <Guff snip>

An example: if I discover what looks like a machine with many parts on the back side of the moon (where I know no human has ever been), and I have no idea of its purpose, your principle of regularity does not apply. I have no idea which parts, if any, are superfluous. Is it reasonable to infer design?

You have two choices: the thing was either designed and left there, or it grew on the moon. Which is it?

My highlight.

Good Arguments (Certainty vs. Probability)
(11-18-2020, 02:52 PM)SteveII Wrote: "The fine-tuning of the universe" is a clear reference to specified complexity in P1. It is a fact that the difference between a universe that holds together and one that does not depends on the narrow possible values of a remarkable number of variables. Specified complexity is not used to support P2. If one were to object to the argument on the basis that the universe is not finely tuned, one would be objecting to P1.

LOL LOL LOL
You have absolutely no information about universes that "don't hold together" (that's beneath even you, Stevie), thus your statement is an assertion with not one shred of support. Or maybe you do ... let's see it: how about you tell us about these universes that did not hold together? What exactly are these "differences" you posit (with no evidence)?

Would that be the famous American Fundy Biola argument for an Inept Creator, OR the "Designs That Didn't Work So Well, So the Universe Fell Apart (But Wait, He Eventually Figured Out How to Make One Right)" argument? I think there needs to be a YouTube video created about this profound argument.

If there are (and we don't know) universes that don't hold together, why exactly would your deity, with its perfect knowledge, be making universes that don't hold together? Because it's "experimenting", or is bored? All you know about is a tiny fraction of evidence from ONE universe. If there are many others, you know nothing about them, and nothing about what "didn't" hold them together. The fine-tuning argument has never been about what holds universes together. It's always been about "fine-tuned" for life. You don't even get what the argument actually is about. And the universe is not "fine-tuned" for life. Life arises in many environments, including hydrothermal vents, which most organisms could not survive. Complex organisms/life EVOLVES in many environments.

P1 is the False Dilemma Fallacy, ("Black and White Thinking" Fallacy)
https://owl.excelsior.edu/argument-and-c...0of%20gray.
AND P2 is false, as it incorrectly assumes P1 is true.
A classic freshman logic error; thus the conclusion does not follow.
Test

Good Arguments (Certainty vs. Probability)
Quote:An example. If I discover what looks like a machine with many parts on the back side of the moon (where I know no human as ever been) and I have no idea the purpose. Your principle of regularity does not apply. I have no idea which parts if any are superfluous. Is it reasonable to infer design?

All these analogies are false, as theists are not just saying "needs a designer", ... they're saying it's *reasonable* to point to ONE (possible) designer, with no evidence.
Their argument is not "design" ... it's "my designer". It's a dishonest argument.

So, nope. Your "opinion" on the matter is irrelevant. If you know nothing about it, then the prudent reasonable thing to infer is *nothing* until you know more ... which is actually the position you are in with respect to anything you think is "too complex", and the universe. You still can't define what is "too complex" ... thus needing a designer, and what is not "too complex", and until you do you're blowing smoke out your ass. I get that the "fundamentalist" personality type consists in part of "low ambiguity tolerance" and a high "need for cognitive closure", but your psychological defects are not really good arguments, and if you don't like that, tough shit ... you thought you were able to identify "character flaws" in others.
Test

Good Arguments (Certainty vs. Probability)
It appears SteveII believes that only consciously directed design can produce something that exceeds some "specified complexity".  Is that right, SteveII?

If so, you need to define that specification.  And you cannot point to something complex and declare "well, THAT is plainly exceeding the threshold, even if we don't know what defines the threshold".  Without knowing the threshold it is impossible to make any meaningful declarations that refer to it.

But set that aside for a minute.  Imagine an item that is exactly at the threshold, but hasn't crossed it.  It is still uncomplicated enough to have come into being via a random process.  The random process is affected by cause/effect influences, so it's not chaotically random, but it is not being consciously directed toward some predetermined configuration/function.  This item has reached its present existence entirely by random means and is just not quite complex enough to require its existence having come via conscious drive.

Sitting there, it gets rained on, a random event.  Right away, an exposed ferrous surface starts to oxidize (rust).  The addition of rust increases the item's complexity:  its appearance has changed, its functionality might have gotten impaired, its dimensions are altered by the formation of the rust, etc. etc.  And it's crossed the complexity threshold into being complicated enough with the rust to require that it be the product of conscious design.

You see where this leads?  It derails the concept that complexity is a prerequisite for design.

You referred to the regularity of planetary orbits as negating regularity as an indicator of design.  Bad example.  Planetary orbits are virtually never regular circles; they're ellipses.  Of all the possible orbits a planet can fall into, the number of possible ellipses is infinite.  The number of possible regular circles is 1.  1 / infinity = high certainty any orbiting planet will be in an ellipse (steadily deteriorating into smaller ellipses).

I'd address more (and might later) but the phone is ringing.

Good Arguments (Certainty vs. Probability)
(11-18-2020, 06:34 PM)airportkid Wrote: It appears SteveII believes that only consciously directed design can produce something that exceeds some "specified complexity".  Is that right, SteveII?

No, that is not right. Chance and necessity can create some complex things.

Quote:If so, you need to define that specification.  And you cannot point to something complex and declare "well, THAT is plainly exceeding the threshold, even if we don't know what defines the threshold".  Without knowing the threshold it is impossible to make any meaningful declarations that refer to it.

But set that aside for a minute.  Imagine an item that is exactly at the threshold, but hasn't crossed it.  It is still uncomplicated enough to have come into being via a random process.  The random process is affected by cause/effect influences, so it's not chaotically random, but it is not being consciously directed toward some predetermined configuration/function.  This item has reached its present existence entirely by random means and is just not quite complex enough to require its existence having come via conscious drive.

Sitting there, it gets rained on, a random event.  Right away, an exposed ferrous surface starts to oxidize (rust).  The addition of rust increases the item's complexity:  its appearance has changed, its functionality might have gotten impaired, its dimensions are altered by the formation of the rust, etc. etc.  And it's crossed the complexity threshold into being complicated enough with the rust to require that it be the product of conscious design.

You see where this leads?  It derails the concept that complexity is a prerequisite for design.

You referred to the regularity of planetary orbits as negating regularity as an indicator of design.  Bad example.  Planetary orbits are virtually never regular circles; they're ellipses.  Of all the possible orbits a planet can fall into, the number of possible ellipses is infinite.  The number of possible regular circles is 1.  1 / infinity = high certainty any orbiting planet will be in an ellipse (steadily deteriorating into smaller ellipses).

I'd address more (and might later) but the phone is ringing.

You are missing a key component. Complexity is only part of an inference to design. You must also assess reasonable chance and necessity. So there is no single line of complexity to cross, because it depends on the other two. If chance is vastly improbable, the complexity need not be all that great. Take the Scrabble-bag example from above: a large Scrabble tile bag with equal quantities of each letter. On the first try, your friend drew, in order, the following:

WEHOLDTHESETRUTHSTOBESELFEVIDENTTHATALLMENARECREATEDEQUAL

It would be a very solid inference that design was behind this occurrence.

But what if we say that he drew tiles 16 million times before he got that string? We would accept that design was not behind it. Why? The complexity did not change one bit. We increased the chance to reasonable levels. By definition, an inference to design is an assessment of complexity vs chance/necessity. Zeroing in on complexity alone is to entirely miss the definition.

I don't understand your point about the planets. If, as you had claimed, 'regularity' was an indication of design, the moon appearing each night would be an indication of design. Regularity does not seem to have much to do with a design inference.

Good Arguments (Certainty vs. Probability)
Very highly improbable single events happen all the time.
I can throw some dice on a table, and with NO "design" come up with an arrangement that has a probability of less than 1/googolplex.
No design involved. "If chance is vastly (but never defined) improbable, the complexity need not be all that *great*" (but never defined) ... no.
Steve's argument is invalid. Nothing is defined. Complexity is not defined. If they can't say what is "too complex to happen naturally" ... WHAT IS THE BOUNDARY ... they have no argument.
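The point about improbable-but-undesigned outcomes is easy to check numerically. A minimal sketch in Python (the choice of 100 dice is mine, purely for illustration):

```python
import random

# Any ONE specific sequence of n fair dice has probability (1/6)**n in advance.
n = 100
rolls = [random.randint(1, 6) for _ in range(n)]

p = (1 / 6) ** n
print(p)  # roughly 1.5e-78: this exact sequence was astronomically improbable, yet it just happened
```

Whatever sequence comes up was, beforehand, astronomically improbable; improbability alone cannot be the mark of design.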

No one is actually talking about "WEHOLDTHESETRUTHSTOBESELFEVIDENTTHATALLMENARECREATEDEQUAL".
That is not what they are claiming, in reality, is designed. It's a red herring. Complexity in nature does not fall out of a bag in one try. It's a false argument. No one says that happens in nature.

Any argument (inference) for design includes knowing about a possible designer, IN ORDER TO MAKE the inference.
Steve's designer is Jebus. There is no designer we know about.
We also don't need one, as Chaos Theory has already accounted for what is observed.

"Intelligent Design" is a religious view, not a scientific theory, according to U.S. District Judge John E. Jones III in his historic decision in Kitzmiller v. Dover.
Intelligent Design is not science, and is not Logic. It's pure and simple, religion.

Design does not meet the criteria of the definition of "inference".
Definition of inference : 1: something that is inferred especially : a conclusion or opinion that is formed because of known facts or evidence
2: the act or process of inferring (see INFER): such as
a: the act of passing from one proposition, statement, or judgment considered as true to another whose truth is believed to follow from that of the former
b: the act of passing from statistical sample data to generalizations (as of the value of population parameters) usually with calculated degrees of certainty
3: the premises and conclusion of a process of inferring

ID has the trappings of a logical argument, but is actually a con.
Complex structures seen in nature EVOLVED to a complex state, over millions of years.
All this BS about "coming upon" something on the moon or on a beach is the fallacy of the false analogy.
Complex things in nature EVOLVED over millions of years (e.g. the complement cascade/system in immunology, the coagulation cascade in hematology, or the Krebs cycle in cell metabolism).
No one "came upon them on a beach", or found them full blown on the far side of the moon, and no one saw them slowly evolve over millions of years. Once the initial event happens (as in cell development), the probability of the NEXT natural event in the long chain forming the complex object is higher, and the next even higher. The system does not retain any initial probability after the chain of events has begun. The probability of the evolved complex system IS NOT the one-shot probability that Stevie is trying to MISREPRESENT it as.

Design is a con. A god of the gaps argument religionists with little faith try to use to con their believers.
Steve is admitting that he *is* the ignorant remote tribal member who, never having seen or encountered a jet plane and finding one crashed on a river bank, "correctly" infers that his god made it.
Test
The following 2 users Like Bucky Ball's post:
  • airportkid, Chas

Good Arguments (Certainty vs. Probability)
(11-18-2020, 02:52 PM)SteveII Wrote:
(11-16-2020, 11:59 PM)Reltzik Wrote:
(11-09-2020, 09:38 PM)SteveII Wrote: For reference:
P1. The fine-tuning of the universe is due to either physical necessity, chance, or design.
P2. It is not due to physical necessity or chance.
C. Therefore, it is due to design.

So, a fair, accurate, non-malicious depiction of my position regarding the Teleological argument would have read as:

P1':  If the premises of an argument are not adequately established, then the conclusion of the argument is also not established.  (At least, not by this particular argument.)
P2':  P2 of the Teleological argument is not adequately established.  (This is where I got into conditional probabilities as a way of addressing a few of the more common ways apologists feign to establish P2.)
L':  The Teleological Argument does not establish the conclusion of a designer's existence.

Note that I marked that as L' rather than C'.  That is because it is a lemma in a larger argument rather than the overall conclusion.  I then characterized the brazenness of the failures to establish P2 as something which could not be adequately attributed to stupidity, at least in the case of someone like WLC.  I marked this as part of a larger pattern within the field of apologetics, which I supported with a few other examples, and went on to express a very strong opinion that this was likely the result of deceptive conduct and deep character flaws on the part of the apologists.  But you wanted to focus on the Teleological Argument part, so let's file the rest of that away for another time.

I'm not interested in your opinions on apologists. I am interested in examining your reasoning.

You have listed above the typical response to the argument--denying P2--but that is not what you have been arguing. You have repeatedly said that the proponent of the argument has to engage whether design is probable. But your argument above entirely depends on undercutting P2 -- a premise that does not even mention design. So how do you make the move from the above to

(11-09-2020, 07:02 PM)Reltzik Wrote: Yes, that is EXACTLY the logic of the argument: inferring that it must be design on the grounds that the other two are unlikely, and making that inference WITHOUT actually establishing ANYTHING about how probable design is in comparison to the other two, because when have Christian apologists ever possessed a shred of the integrity required to hold their favorite answer to the same standard as the alternatives?  I am addressing the logic of the argument by showing it's missing a piece needed to be cogent, and you're saying I'm not addressing the logic of the argument on the grounds that I'm not skipping the same step that it does.  WTF?

You are "not addressing the logic of the argument." It seems if you actually inserted the 'skipped step', it would be a question-begging argument. Inferring something from valid premises is the way an argument works--especially when something like Design is a properly inferred property.  For your 'critique' to be successful, you would have to show that design is not properly inferred through reasoning. You don't show it, you asserted it.

I did expand on this, but I may not have done so clearly.  I'll give it another go.

To first clearly state my position on this: I maintain that simply establishing that a certain probability of an event is low in the abstract (meaning, unconditioned on knowing what things look like after) is not in and of itself sufficient for dismissing it and embracing a mutually exclusive alternative as a plausible explanation when we observe the state of the world AFTER the fact.  Examining one explanation's unconditioned probability of occurring, identifying it as low, and thereby dismissing it as an unlikely explanation is not a rational process.  What we should be doing is comparing both conditioned probabilities, and dismissing one only if the other is vastly more probable.

Why is this important?  Let's look at an example.  Someone wins a lottery, an event we can easily calculate as having a 10^-7 probability of occurring by chance.  Did this occur by chance, or by design?  Did the person win fairly or did they somehow cheat?  Well, since the odds of them winning by chance are only one in ten million, we can dismiss this as extremely improbable and conclude that they cheated, right?  No, for two reasons.  First, we have not examined how unlikely it might be that someone could win through cheating, and second, we are not examining the conditioned probability.  In other words, we shouldn't be asking what the odds of them winning by chance are.  Instead, we should ask what the odds of them winning by chance are GIVEN that they are holding a winning ticket.  To make this clearer, what if I added in the information that security around the lottery game was so tight that the chance of an attempt to win by cheating was 10^-12?  Which would be the more reasonable explanation to accept then?  The chance explanation, obviously.  And so the reasonableness of discarding chance as an explanation on the basis of probability hinges on what the probability of design might be.

In other words, the odds we actually care about are P(Win through Chance | Win) vs P(Win through Design | Win), and not P(Win through Chance) vs P(Win through Design).  If Chance and Design are mutually exclusive and exhaustive, then knowing one of each pair is enough to find the other, so we don't need both.  But comparing P(Chance) with P(Design | Win) is not a rational process.
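The lottery comparison above can be sketched in a few lines of arithmetic (a sketch only; it reuses the 10^-7 and 10^-12 figures from the example and assumes chance and cheating are the only two routes to a win):

```python
# Unconditioned probabilities of each route producing a winning ticket
p_win_by_chance = 1e-7   # odds of a fair win, from the example
p_win_by_design = 1e-12  # odds of a successful cheat, given the tight security

# Condition on the fact that SOMEONE is holding a winning ticket
p_win = p_win_by_chance + p_win_by_design
p_chance_given_win = p_win_by_chance / p_win
p_design_given_win = p_win_by_design / p_win

print(round(p_chance_given_win, 5))  # 0.99999 -- chance is by far the better explanation
```

Even though a fair win was a one-in-ten-million event in the abstract, conditioned on the win it is overwhelmingly the more probable explanation.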

Now let's look at Dembski's treatment of this.  (I'll be working off of memory and google, because my library is shut down due to plague and I don't feel like throwing money at a fraud to buy the book.  And yes, the fact that Dembski puts forward this argument despite his background in mathematics and information theory perfectly positioning him to recognize everything that is wrong with it is enough to call him a fraud... but you don't care about that, so let's leave it aside.)  For all his many failures on this front (intentional or unintentional), Dembski did at least understand that his argument was probabilistic, and he (fairly arbitrarily) set the boundary at which he thought chance might be discarded as an explanation at 10^-150.  (I understand he later changed his preferred boundary, but he still deals with it in terms of probabilities.)  And to be clear, he's pulling a bait and switch when he applies this standard.  He's moving from the probability that we would see what we do given that it is the result of chance to the probability that chance was the explanation given what we see.  That requires either brazen equivocation, or the sort of additional information that I've been demanding.

So if you're citing him, and he thought this was about probabilities, then why don't you?  I'm not saying you can't part ways with him.  I'll even say that you should part ways with him.  But I'd like to know why you do.

(11-18-2020, 02:52 PM)SteveII Wrote:
Quote:
(11-09-2020, 09:38 PM)SteveII Wrote: Your error is that design is a property that is always inferred from two things: a) the presence of specified complexity (mentioned in P1') --not just complexity and 2) eliminating other possible causes.

No.  Design is not always inferred from specified complexity and the elimination of other possible causes.  Specified complexity is often used as an attempt to support P2, but it is far from the only basis for it.  The Teleological Argument is ancient, and apologists were concluding design from it centuries before Creationists ever attempted a justification through specified complexity.  (I'll leave aside quibbles about whether Dembski's Intelligent Design position for a designer for biological systems counts as the Teleological Argument.  They're not exactly the same thing, but hey, close enough for this discussion.)

Or do you mean instead that specified complexity will always lead one to infer design?  I'm not sure which way you meant "always" in that quote.

Either way, you did NOT mention specified complexity in P1'.

"The fine-tuning of the universe" is a clear reference to specified complexity in P1.

The fine-tuning of the universe has been around as a concept for over a century.  Specified complexity entered the conversation in the late '70s or early '80s, and didn't get adapted (hijacked) until the '90s into something that at least seemed like it might be applicable to the fine-tuning of the universe.  Am I supposed to imagine that every time someone mentioned the fine-tuning of the universe in the interval from 1913 through 1993, it was a reference to a concept that didn't exist yet?  Or that starting in '94, all the references that were to the octogenarian concept were now also talking about the new one, when such references before had not?

No, this is not a clear reference at all.  These are two distinct concepts that someone might argue are similar or linked, but which could be and often are talked about separately.  One does not equate to or entail the other, and mentioning one should not automatically be taken as a mention of the other.

But okay, you seem to be equating the two.  I'll try to keep that in mind moving forward.

(11-18-2020, 02:52 PM)SteveII Wrote: It is a fact that the difference between a universe that holds together and one that does not depends on the narrow possible values of a remarkable number of variables. Specified complexity is not used to support P2. If one is to object to the argument on the grounds that the universe is not finely tuned, you would be objecting to P1.

Come again?  P1 was setting up mutual exclusion between chance (including necessity) and design.  P2 was the elimination of chance as an explanation.  Are you saying that specified complexity or fine-tuning are being brought into the conversation to support mutual exclusivity between chance and design, rather than to eliminate chance as an explanation?  That strikes me as a very novel and confusing approach.

(11-18-2020, 02:52 PM)SteveII Wrote: You said "Design is not always inferred from specified complexity and the elimination of other possible causes". Well those two components are both necessary and sufficient for a design inference. You have argued other ways to infer design but they are neither necessary nor sufficient.

Oh, hey, we agree on something: other ways to infer design are not sufficient.  Lemme just file that away in case I need to quote it later.

(11-18-2020, 02:52 PM)SteveII Wrote:
Quote:Also, by your own words, you actually have to ELIMINATE chance as a possible cause and combine that, independently, with an observation of specified complexity in order to conclude design, rather than using specified complexity itself as a basis to eliminate chance.  Did you mean to phrase this a different way?

I'll amend to "eliminate as a reasonable explanation."

Does that mean you aren't using specified complexity as a basis to eliminate chance as a reasonable explanation?

(11-18-2020, 02:52 PM)SteveII Wrote:
Quote:
(11-09-2020, 09:38 PM)SteveII Wrote: Another analogy to show how the property of design is inferred: A large scrabble tile bag with large equal quantities of letters. Your friend said he drew in order the following:

WEHOLDTHESETRUTHSTOBESELFEVIDENTTHATALLMENARECREATEDEQUAL

Are we to believe him? No, we would not. Why? Because of a design inference due to the specified complexity and the very low probability of chance.  How about if he said he drew these?

XERTYHJKIOSYUIODPHOPOQIJNIBQLDKIJXMFLEKDNEVQKBZVSNWYKERLA

It is equally improbable to have drawn this specific string of letters. But, we would believe him. Why? There is no specified complexity and therefore no inference to design.

Okay, so we're dealing with Dembski's version of specified complexity, rather than the original notion.  Thought as much.

What strikes me about this example is that it is far from a counter of my contention that these inferences are based on other probabilities and likelihoods.  The only reason we would doubt the first string of letters was random is that it seems like something that an intelligent agent familiar with the Declaration of Independence might gravitate towards (which, yes, is your point).  If I had absolutely no knowledge of the Declaration or how phonemes in English worked, I might well view this string of letters as unspecified complexity.  Going back to Bayes, that is P(Order | Design), one of the missing elements I said might make P2 work.  You're not exactly persuading me that this criticism was incorrect by providing a counterexample that isn't a counterexample.

The example was to illustrate how a justified inference to design works. Design inference is by definition a reasoning exercise. In this example, the conclusion is quickly arrived at--not chance, therefore design. The reasoning was not the probability of design. It was the improbability of chance. You can see that if you shorten the example. WEHOLD is very plausibly chance. There is no calculation of the probability of design.

You have touched on an important point. A lack of knowledge (of English in this case) might lead you to believe something is chance when it is not--universally undercutting the chance hypothesis--because perhaps we just don't have enough information.  But a lack of knowledge does not easily lead someone to infer design, because identifying the specified complexity as part of the reasoning process requires a higher level of knowledge of what you are looking at. For example, the more we understand the laws of physics, the more complex we discover them to be. More knowledge has actually strengthened the argument for design by undercutting the chance hypothesis.

Yes, shortening it to just WEHOLD makes it a lot less complex (that being specified complexity's definition of complexity).  But I was asking about specificity, not complexity.

Again, Dembski DID consider this a matter of probabilities, and while he was pretty loose in his definitions of what it means for something to be specified, he tended to use the term as conforming to some pattern that would have extremely low odds of having been conformed to by chance.  It wasn't enough that the string of letters was long, it also had to match something (in this case, a line from the Declaration of Independence).  So what specification is the universe meeting?  It's not spelling out WEHOLDTHESETRUTHSETC, so what exactly are you saying it matches?  The laws of physics might be described as complex (though I feel that's more a subjectively relative measure in this case), but how are they specified?  To what are they conforming?

In any case, a lack of knowledge CAN lead someone to infer design rather than chance.  Going back to the glass planet (without the inscribed words), what if I had no idea how accretion could form planets naturally?  Would that lack of knowledge not make me more likely to infer design?
"To surrender to ignorance and call it God has always been premature, and it remains premature today." - Isaac Asimov

Good Arguments (Certainty vs. Probability)
(11-18-2020, 10:46 PM)Reltzik Wrote:
(11-18-2020, 02:52 PM)SteveII Wrote:
(11-16-2020, 11:59 PM)Reltzik Wrote: So, a fair, accurate, non-malicious depiction of my position regarding the Teleological argument would have read as:

P1':  If the premises of an argument are not adequately established, then the conclusion of the argument is also not established.  (At least, not by this particular argument.)
P2':  P2 of the Teleological argument is not adequately established.  (This is where I got into conditional probabilities as a way of addressing a few of the more common ways apologists feign to establish P2.)
L':  The Teleological Argument does not establish the conclusion of a designer's existence.

Note that I marked that as L' rather than C'.  That is because it is a lemma in a larger argument rather than the overall conclusion.  I then characterized the brazenness of the failures to establish P2 as something which could not be adequately attributed to stupidity, at least in the case of someone like WLC.  I marked this as part of a larger pattern within the field of apologetics, which I supported with a few other examples, and went on to express a very strong opinion that this was likely the result of deceptive conduct and deep character flaws on the part of the apologists.  But you wanted to focus on the Teleological Argument part, so let's file the rest of that away for another time.

I'm not interested in your opinions on apologists. I am interested in examining your reasoning.

You have listed above the typical response to the argument--denying P2--but that is not what you have been arguing. You have repeatedly said that the proponent of the argument has to engage whether design is probable. But your argument above entirely depends on undercutting P2 -- a premise that does not even mention design. So how do you make the move from the above to

(11-09-2020, 07:02 PM)Reltzik Wrote: Yes, that is EXACTLY the logic of the argument: inferring that it must be design on the grounds that the other two are unlikely, and making that inference WITHOUT actually establishing ANYTHING about how probable design is in comparison to the other two, because when have Christian apologists ever possessed a shred of the integrity required to hold their favorite answer to the same standard as the alternatives?  I am addressing the logic of the argument by showing it's missing a piece needed to be cogent, and you're saying I'm not addressing the logic of the argument on the grounds that I'm not skipping the same step that it does.  WTF?

You are "not addressing the logic of the argument." It seems if you actually inserted the 'skipped step', it would be a question-begging argument. Inferring something from valid premises is the way an argument works--especially when something like Design is a properly inferred property.  For your 'critique' to be successful, you would have to show that design is not properly inferred through reasoning. You don't show it, you asserted it.

I did expand on this, but I may not have done so clearly.  I'll give it another go.

To first clearly state my position on this: I maintain that simply establishing that a certain probability of an event is low in the abstract (meaning, unconditioned on knowing what things look like after) is not in and of itself sufficient for dismissing it and embracing a mutually exclusive alternative as a plausible explanation when we observe the state of the world AFTER the fact.  Examining one explanation's unconditioned probability of occurring, identifying it as low, and thereby dismissing it as an unlikely explanation is not a rational process.  What we should be doing is comparing both conditioned probabilities, and dismissing one only if the other is vastly more probable.

Why is this important?  Let's look at an example.  Someone wins a lottery, an event we can easily calculate as having a 10^-7 probability of occurring by chance.  Did this occur by chance, or by design?  Did the person win fairly or did they somehow cheat?  Well, since the odds of them winning by chance are only one in ten million, we can dismiss this as extremely improbable and conclude that they cheated, right?  No, for two reasons.  First, we have not examined how unlikely it might be that someone could win through cheating, and second, we are not examining the conditioned probability.  In other words, we shouldn't be asking what the odds of them winning by chance are.  Instead, we should ask what the odds of them winning by chance are GIVEN that they are holding a winning ticket.  To make this clearer, what if I added in the information that security around the lottery game was so tight that the chance of an attempt to win by cheating was 10^-12?  Which would be the more reasonable explanation to accept then?  The chance explanation, obviously.  And so the reasonableness of discarding chance as an explanation on the basis of probability hinges on what the probability of design might be.

In other words, the odds we actually care about are P(Win through Chance | Win) vs P(Win through Design | Win), and not P(Win through Chance) vs P(Win through Design).  If Chance and Design are mutually exclusive and exhaustive, then knowing one of each pair is enough to find the other, so we don't need both.  But comparing P(Chance) with P(Design | Win) is not a rational process.

Now let's look at Dembski's treatment of this.  (I'll be working off of memory and google, because my library is shut down due to plague and I don't feel like throwing money at a fraud to buy the book.  And yes, the fact that Dembski puts forward this argument despite his background in mathematics and information theory perfectly positioning him to recognize everything that is wrong with it is enough to call him a fraud... but you don't care about that, so let's leave it aside.)  For all his many failures on this front (intentional or unintentional), Dembski did at least understand that his argument was probabilistic, and he (fairly arbitrarily) set the boundary at which he thought chance might be discarded as an explanation at 10^-150.  (I understand he later changed his preferred boundary, but he still deals with it in terms of probabilities.)  And to be clear, he's pulling a bait and switch when he applies this standard.  He's moving from the probability that we would see what we do given that it is the result of chance to the probability that chance was the explanation given what we see.  That requires either brazen equivocation, or the sort of additional information that I've been demanding.

So if you're citing him, and he thought this was about probabilities, then why don't you?  I'm not saying you can't part ways with him.  I'll even say that you should part ways with him.  But I'd like to know why you do.

Your lottery example has problems. To be analogous to the beginning of the universe, the probability of cheating cannot be calculated--not simply difficult, actually impossible. Someone already won the lottery. You cannot go back and assess the possible ways/opportunities people could have cheated and come up with a probability. That assessment would be based on a priori knowledge of the process that produced the result you are examining.

The problem seems to be that you have applied a counter argument for Intelligent Design in Biology to the question of the initial conditions of the universe. The two are fundamentally different. The amount of information you need to assess what evolution can or cannot do is staggering, but you can get to work on it. The Teleological Argument addresses only a dozen or so constants that could have had other values but turned out to be just so. There is no process to examine.

A more analogous counter-example will illustrate: There is a drawing of identical ping-pong balls by a machine that works flawlessly to randomize them. To keep the numbers similar, let's say there are 1,000,000 identical balls, except 999,999 are white and one is red. The odds of a red ball coming down the chute are one in a million. The first drawing, a red one comes down. Improbable, but reasonable. You do the drawing 11 more times and red comes down every time. No other information can be known--only the probability of chance, 1 in 10^72. Is it reasonable to infer something is fixed?
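A quick sketch of the arithmetic, using the numbers as given in the example:

```python
# One red ball among 1,000,000; the machine is reset between draws,
# so the draws are independent.
p_red = 1 / 1_000_000

# Probability of red coming down the chute 12 times in a row by chance.
p_twelve_reds = p_red ** 12

print(p_twelve_reds)   # ~1e-72: one in a 1 followed by 72 zeros
```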

This is about probabilities. The Teleological Argument is an inductive argument, not a deductive one, and as such, probabilistic.

Quote:
(11-18-2020, 02:52 PM)SteveII Wrote:
Quote:No.  Design is not always inferred from specified complexity and the elimination of other possible causes  Specified complexity is often used as an attempt to support P2, but it is far from the only basis for it.  The Teleological Argument is ancient, and apologists were concluding design from it centuries before Creationists ever attempted a justification through specified complexity.  (I'll leave aside quibbles about whether his Intelligent Design position for a designer for biological systems counts as the Teleological Argument.  They're not exactly the same thing, but hey, close enough for this discussion.)

Or do you mean instead that specified complexity will always lead one to infer design?  I'm not sure which way you meant "always" in that quote.

Either way, you did NOT mention specified complexity in P1'.

"The fine-tuning of the universe" is a clear reference to specified complexity in P1.

The fine-tuning of the universe has been around as a concept for over a century.  Specified complexity entered the conversation in the late '70s or early '80s, and didn't get adapted (hijacked) until the '90s into something that at least seemed like it might be applicable to the fine-tuning of the universe.  Am I supposed to imagine that every time someone mentioned the fine-tuning of the universe in the interval from 1913 through 1993, it was a reference to a concept that didn't exist yet?  Or that starting in '94, all the references that were to the octogenarian concept were now also talking about the new one, when such references before had not been?

No, this is not a clear reference at all.  These are two distinct concepts that someone might argue are similar or linked, but which could be and often are talked about separately.  One does not equate to or entail the other, and mentioning one should not automatically be taken as a mention of the other.

But okay, you seem to be equating the two.  I'll try to keep that in mind moving forward.

"Fine-tuning" refers to a specific set of values necessary for intelligent life. It would be useless if the subject of the argument was a set of complex variables not conducive to intelligent life. When discussing whether a design inference is appropriate, the subject has to be of a specific type of complexity--not just any complexity--ergo specified complexity is the best term.

Quote:
(11-18-2020, 02:52 PM)SteveII Wrote: It is a fact that the difference between a universe that holds together and one that does not depends on the narrow possible values of a remarkable number of variables. Specified complexity is not used to support P2. If one is to object to the argument on the grounds that the universe is not finely tuned, you would be objecting to P1.

Come again?  P1 was setting up mutual exclusion between chance (including necessity) and design.  P2 was the elimination of chance as an explanation.  Are you saying that specified complexity or fine-tuning are being brought into the conversation to support mutual exclusivity between chance and design, rather than to eliminate chance as an explanation?  That strikes me as a very novel and confusing approach.

Part of the claim in P1 is that the universe is indeed finely-tuned for intelligent life. If an objection is made on the grounds that the universe is not finely-tuned (that life could have evolved in a wider range of values), it is an objection to P1. If you accept that the universe is finely tuned for life (as many scientists admit) but object to the argument because you think chance is a reasonable explanation, you will object to P2.

Quote:
(11-18-2020, 02:52 PM)SteveII Wrote: You said "Design is not always inferred from specified complexity and the elimination of other possible causes". Well those two components are both necessary and sufficient for a design inference. You have argued other ways to infer design but they are neither necessary nor sufficient.

Oh, hey, we agree on something: other ways to infer design are not sufficient.  Lemme just file that away in case I need to quote it later.

(11-18-2020, 02:52 PM)SteveII Wrote:
Quote:Also, by your own words, you actually have to ELIMINATE chance as a possible cause and combine that, independently, with an observation of specified complexity in order to conclude design, rather than using specified complexity itself as a basis to eliminate chance.  Did you mean to phrase this a different way?

I'll amend to "eliminate as a reasonable explanation."

Does that mean you aren't using specified complexity as a basis to eliminate chance as a reasonable explanation?

(11-18-2020, 02:52 PM)SteveII Wrote:
Quote:Okay, so we're dealing with Dembski's version of specified complexity, rather than the original notion.  Thought as much.

What strikes me about this example is that it is far from a counter of my contention that these inferences are based on other probabilities and likelihoods.  The only reason we would doubt the first string of letters was random is that it seems like something that an intelligent agent familiar with the Declaration of Independence might gravitate towards (which, yes, is your point).  If I had absolutely no knowledge of the Declaration or how phonemes in English worked, I might well view this string of letters as unspecified complexity.  Going back to Bayes, that is P(Order | Design), one of the missing elements I said might make P2 work.  You're not exactly persuading me that this criticism was incorrect by providing a counterexample that isn't a counterexample.

The example was to illustrate how a justified inference to design works. Design inference is by definition a reasoning exercise. In this example, the conclusion is quickly arrived at--not chance, therefore design. The reasoning was not the probability of design. It was the improbability of chance. You can see that if you shorten the example. WEHOLD is very plausibly chance. There is no calculation of the probability of design.

You have touched on an important point. A lack of knowledge (of English in this case) might lead you to believe something is chance when it is not--universally undercutting the chance hypothesis--because perhaps we just don't have enough information.  But a lack of knowledge does not easily lead someone to infer design, because identifying the specified complexity as part of the reasoning process requires a higher level of knowledge of what you are looking at. For example, the more we understand the laws of physics, the more complex we discover them to be. More knowledge has actually strengthened the argument for design by undercutting the chance hypothesis.

Yes, shortening it to just WEHOLD makes it a lot less complex (that being specified complexity's definition of complexity).  But I was asking about specificity, not complexity.

Again, Dembski DID consider this a matter of probabilities, and while he was pretty loose in his definitions of what it means for something to be specified, he tended to use the term as conforming to some pattern that would have extremely low odds of having been conformed to by chance.  It wasn't enough that the string of letters was long, it also had to match something (in this case, a line from the Declaration of Independence).  So what specification is the universe meeting?  It's not spelling out WEHOLDTHESETRUTHSETC, so what exactly are you saying it matches?  The laws of physics might be described as complex (though I feel that's more a subjectively relative measure in this case), but how are they specified?  To what are they conforming?

In any case, a lack of knowledge CAN lead someone to infer design rather than chance.  Going back to the glass planet (without the inscribed words), what if I had no idea how accretion could form planets naturally?  Would that lack of knowledge not make me more likely to infer design?

I think a lot of this latter part was addressed in my reply above.
Reply

Good Arguments (Certainty vs. Probability)
(11-19-2020, 07:17 PM)SteveII Wrote:
(11-18-2020, 10:46 PM)Reltzik Wrote:
(11-18-2020, 02:52 PM)SteveII Wrote: I'm not interested in your opinions on apologist. I am interested in examining your reasoning.

You have listed above the typical response to the argument--denying P2--but that is not what you have been arguing. You have repeatedly said that the proponent of the argument has to engage whether design is probable. But your argument above entirely depends on undercutting P2 -- a premise that does not even mention design. So how do you make the move from the above to


You are "not addressing the logic of the argument." It seems that if you actually inserted the 'skipped step', it would be a question-begging argument. Inferring something from valid premises is the way an argument works--especially when something like Design is a properly inferred property.  For your 'critique' to be successful, you would have to show that design is not properly inferred through reasoning. You don't show it; you assert it.

I did expand on this, but I may not have done so clearly.  I'll give it another go.

To first clearly state my position on this, I maintain that simply establishing that a certain probability of an event is low in the abstract (meaning, unconditioned on knowing what things look like after) is not in and of itself sufficient for dismissing it and embracing a mutually exclusive alternative as a plausible explanation when we observe the state of the world AFTER the fact.  Examining one explanation's unconditioned probability of occurring, identifying it as low, and thereby dismissing it as an unlikely explanation is not a rational process.  What we should be doing is comparing both conditioned probabilities, and if one is vastly more probable than the other, then dismissing the less probable one.

Why is this important?  Let's look at an example.  Someone wins a lottery, an event we can easily calculate as having a probability of 10^-7 of occurring by chance.  Did this occur by chance, or by design?  Did the person win fairly or did they somehow cheat?  Well, since the odds of them winning by chance are only one in ten million, we can dismiss this as extremely improbable and conclude that they cheated, right?  No, for two reasons.  First, we have not examined how unlikely it might be that someone could win through cheating, and second, we are not examining the conditioned probability.  In other words, we shouldn't be asking what the odds of them winning by chance are.  Instead, we should ask what the odds of them winning by chance are GIVEN that they are holding a winning ticket.  To make this clearer, what if I added in the information that security around the lottery game was so tight that the chances of an attempt to win by cheating were 10^-12?  Which would be the more reasonable explanation to accept then?  The chance explanation, obviously.  And so the reasonableness of discarding chance as an explanation on the basis of probability hinges on what the probability of design might be.

In other words, the odds we actually care about are P(Win through Chance | Win) vs P(Win through Design | Win), and not P(Win through Chance) vs P(Win through Design).  If Chance and Design are mutually exclusive, then knowing one of each pair is enough to find the other, so we don't need both.  But comparing P(Chance) with P(Design | Win) is not a rational process.

Now let's look at Dembski's treatment of this.  (I'll be working off of memory and google, because my library is shut down due to plague and I don't feel like throwing money at a fraud to buy the book.  And yes, the fact that Dembski puts forward this argument despite his background in mathematics and information theory perfectly positioning him to recognize everything that is wrong with it is enough to call him a fraud... but you don't care about that, so let's leave it aside.)  For all his many failures on this front (intentional or unintentional), Dembski did at least understand that his argument was probabilistic, and he (fairly arbitrarily) set the boundary at which he thought chance might be discarded as an explanation at 10^-150.  (I understand he later changed his preferred boundary, but he still deals with it in terms of probabilities.)  And to be clear, he's pulling a bait and switch when he applies this standard.  He's moving from the probability that we would see what we do given that it is the result of chance to the probability that chance was the explanation given what we see.  That requires either brazen equivocation, or the sort of additional information that I've been demanding.

So if you're citing him, and he thought this was about probabilities, then why don't you?  I'm not saying you can't part ways with him.  I'll even say that you should part ways with him.  But I'd like to know why you do.

Your lottery example has problems. To be analogous to the beginning of the universe, the probability of cheating cannot be calculated--not simply difficult, actually impossible. Someone already won the lottery. You cannot go back and assess the possible ways/opportunities people could have cheated and come up with a probability. That assessment would be based on a priori knowledge of the process that produced the result you are examining.

That was more an example by which we would know, from the outside illustration, that the reasoning was horribly flawed, and the people inside the example needn't know how secure the systems were at all.

But okay, I'll play along.  Let's say we have no information about how difficult the security measures are to beat.  If the odds of a successful cheat of the system are much greater than a chance win, 1e-2 for example, then design would be the reasonable inference, and if they were much lower, let's reuse 1e-12 for the example, then chance would be the reasonable inference.  But we don't actually know if the chance of a win by design is a lot higher, a lot lower, or fairly comparable to the odds of a win by chance.  How can we compare the two probabilities if we don't have the second one, as you insist we can't?  And if we don't know this probability, then why would someone arguing that the winner cheated just automatically assume that a cheat was the reasonable inference?  There are two other options (similar probabilities and probabilities vastly in favor of a fair win) on the table, and now after this restriction of knowledge about the odds of beating security we have no way of knowing which of the three options are more likely.  On what basis do we favor the one above the other two, other than preexisting bias?  This step is missing. 

(11-19-2020, 07:17 PM)SteveII Wrote: The problem seems to be that you have applied a counter argument for Intelligent Design in Biology to the question of the initial conditions of the universe. The two are fundamentally different. The amount of information you need to assess what evolution can or cannot do is staggering, but you can get to work on it. The Teleological Argument addresses only a dozen or so constants that could have had other values but turned out to be just so. There is no process to examine.

The fine-tuning argument addresses those (or at least tried to).  We could treat the fine tuning argument as a variation on the teleological argument or an example of a larger class of teleological arguments, but just as fine-tuning was around for decades before specified complexity was thrown into the mix, the teleological argument was around for millennia before the constants you're talking about were identified.  I guess you're conflating these two as well, and I'll try to keep it in mind going forward.

You fairly criticized the lottery example for assuming a-priori knowledge, but the fine-tuning argument does the same and more.  It isn't just making a-priori assumptions about the odds that a designer would set the constants this way, it also makes bald assertions that the constants ending up this way by chance are unlikely.

I don't know which list of a dozen or so constants you're talking about, and there have been so many such lists produced (many of which are absurd in their overeagerness to list anything that they'll throw in items that clearly aren't required for intelligent life or which are essentially restatements or implications of other items on the list) that I won't even try to google around to find which one you're talking about.  So I'll just pick the gravitational constant as an oft-cited example.

We can have an interesting argument about how finely-tuned this constant actually needs to be for some form of intelligent life to arise, but for the sake of argument I'll assume it needs to be a narrow range of values.  (And no, I won't define "narrow", because that's a huge mathematical can of worms and that ball is definitely not in my court.)  What are the ACTUAL odds that a universe that came about by chance would get a gravitational constant in that narrow range of values?

We don't know this value.  How would we even find it?  We can't take a frequentist approach, run hundreds of experiments of creating a universe, and statistically estimate the odds of getting that constant.  We can't take the classical approach because about the only thing we can possibly know about the probability distribution is that it's not uniform.  (Uniform distribution across the range of all real numbers is mathematically impossible.)  That leaves Bayesian probability (slightly related to, but distinct from, the Bayesian calculations I was using earlier... yes, some mathematicians get multiple things named after them and, yes, it is annoying).  Bayesian probability is basically just a subjective gut-check of how likely we think something is, and it's less an unbiased look at things and more a way of quantifying someone's bias.  It just can't do the job of proving objective facts, unless they're objective facts about how likely people think something to be.  I wouldn't list it as an option at all, except (A) it's the only one left and (B) it seems to be what the people advancing this argument are using.
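To make concrete how much a Bayesian answer here depends on the prior you feed it, here is a sketch in which every number is invented for illustration (the likelihoods, the priors, all of it):

```python
# Posterior probability of "design" given the observation, for a fixed
# pair of (invented) likelihoods, as a function of the prior on design.
def posterior_design(prior_design,
                     p_obs_given_design=1.0,
                     p_obs_given_chance=1e-10):
    prior_chance = 1 - prior_design
    num = p_obs_given_design * prior_design
    den = num + p_obs_given_chance * prior_chance
    return num / den

# Sweep the prior and watch the "conclusion" follow it.
for prior in (1e-15, 1e-10, 1e-5, 0.5):
    print(prior, posterior_design(prior))
```

Holding the likelihoods fixed, the posterior swings from negligible to near-certainty purely on the strength of the prior, which is exactly the quantified-bias problem: the calculation reports back whatever degree of belief was smuggled in.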

So take my lotto example, and in addition to removing all knowledge about how likely it is for an attempt to cheat to succeed also remove all knowledge of how likely it is for a given ticket to win by chance.  All we know is that at least one person won.  On what basis do we conclude that this winner cheated?

(11-19-2020, 07:17 PM)SteveII Wrote: A more analogous counter-example will illustrate: There is a drawing of identical ping-pong balls by a machine that works flawlessly to randomize them. To keep the numbers similar, let's say there are 1,000,000 identical balls, except 999,999 are white and one is red. The odds of a red ball coming down the chute are one in a million. The first drawing, a red one comes down. Improbable, but reasonable. You do the drawing 11 more times and red comes down every time. No other information can be known--only the probability of chance, 1 in 10^72. Is it reasonable to infer something is fixed?

I'd add to that the possibility of the machine being broken and a few other options, but that it's somehow not behaving according to raw chance?  Yeah, that's a reasonable inference.  Now if we don't know a-priori how many balls of what colors are in the machine, and we get 12 red balls, why would we assume that the odds of the red ball popping out are 1e-6 as opposed to 1e-12 or 1e-1 or 9.999997e-7?

Are we allowing a-priori knowledge here, or not?  Or is this another of those apologist games where you demand that I have to play by restrictive rules and then you just ignore the rules all you want?  Because that sort of bottomless dearth of even the pretense of integrity really pisses me off.

(Who am I kidding, that's ALL apologetics.)

(11-19-2020, 07:17 PM)SteveII Wrote: This is about probabilities. The Teleological Argument is an inductive argument, not a deductive one, and as such, probabilistic.

Probabilistic arguments ARE inductive arguments.  If I establish that the odds of rolling three dice and getting all 6s are less than 1%, that does not deductively prove that when I make that roll I won't get 6-6-6.  It's a strong inductive conclusion, but not a sound deductive conclusion.
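The dice arithmetic, for the record:

```python
# Probability of rolling three fair dice and getting all 6s.
p_three_sixes = (1 / 6) ** 3

print(p_three_sixes)   # 1/216, i.e. under 1% -- yet it still happens
```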

(11-19-2020, 07:17 PM)SteveII Wrote:
Quote:
(11-18-2020, 02:52 PM)SteveII Wrote: "The fine-tuning of the universe" is a clear reference to specified complexity in P1.

The fine-tuning of the universe has been around as a concept for over a century.  Specified complexity entered the conversation in the late '70s or early '80s, and didn't get adapted (hijacked) until the '90s into something that at least seemed like it might be applicable to the fine-tuning of the universe.  Am I supposed to imagine that every time someone mentioned the fine-tuning of the universe in the interval from 1913 through 1993, it was a reference to a concept that didn't exist yet?  Or that starting in '94, all the references that were to the octogenarian concept were now also talking about the new one, when such references before had not been?

No, this is not a clear reference at all.  These are two distinct concepts that someone might argue are similar or linked, but which could be and often are talked about separately.  One does not equate to or entail the other, and mentioning one should not automatically be taken as a mention of the other.

But okay, you seem to be equating the two.  I'll try to keep that in mind moving forward.

"Fine-tuning" refers to a specific set of values necessary for intelligent life. It would be useless if the subject of the argument was a set of complex variables not conducive to intelligent life. When discussing whether a design inference is appropriate, the subject has to be of a specific type of complexity--not just any complexity--ergo specified complexity is the best term.

Quote:
(11-18-2020, 02:52 PM)SteveII Wrote: It is a fact that the difference between a universe that holds together and one that does not depends on the narrow possible values of a remarkable number of variables. Specified complexity is not used to support P2. If one is to object to the argument on the grounds that the universe is not finely tuned, you would be objecting to P1.

Come again?  P1 was setting up mutual exclusion between chance (including necessity) and design.  P2 was the elimination of chance as an explanation.  Are you saying that specified complexity or fine-tuning are being brought into the conversation to support mutual exclusivity between chance and design, rather than to eliminate chance as an explanation?  That strikes me as a very novel and confusing approach.

Part of the claim in P1 is that the universe is indeed finely-tuned for intelligent life. If an objection is made on the grounds that the universe is not finely-tuned (that life could have evolved in a wider range of values), it is an objection to P1. If you accept that the universe is finely tuned for life (as many scientists admit) but object to the argument because you think chance is a reasonable explanation, you will object to P2.

No, actually.  Fine-tuning receives equal mention in both premises and the conclusion, even if it's referred to by pronoun in P2 and C.  In none of these is the existence of fine-tuning actually the premise.  Rejecting P1 would reject only the idea that design and chance (and necessity) are the only possible explanations, and that should get some critique too.  But the objection I've been raising (spelling it out for you YET AGAIN) is that chance has not been eliminated as a reasonable explanation.  That's P2.

(11-19-2020, 07:17 PM)SteveII Wrote:
Quote:
(11-18-2020, 02:52 PM)SteveII Wrote: You said "Design is not always inferred from specified complexity and the elimination of other possible causes". Well those two components are both necessary and sufficient for a design inference. You have argued other ways to infer design but they are neither necessary nor sufficient.

Oh, hey, we agree on something: other ways to infer design are not sufficient.  Lemme just file that away in case I need to quote it later.

(11-18-2020, 02:52 PM)SteveII Wrote: I'll amend to "eliminate as a reasonable explanation."

Does that mean you aren't using specified complexity as a basis to eliminate chance as a reasonable explanation?

(11-18-2020, 02:52 PM)SteveII Wrote: The example was to illustrate how a justified inference to design works. Design inference is by definition a reasoning exercise. In this example, the conclusion is quickly arrived at--not chance, therefore design. The reasoning was not the probability of design. It was the improbability of chance. You can see that if you shorten the example. WEHOLD is very plausibly chance. There is no calculation of the probability of design.

You have touched on an important point. A lack of knowledge (of English in this case) might lead you to believe something is chance when it is not--universally undercutting the chance hypothesis--because perhaps we just don't have enough information.  But a lack of knowledge does not easily lead someone to infer design, because identifying the specified complexity as part of the reasoning process requires a higher level of knowledge of what you are looking at. For example, the more we understand the laws of physics, the more complex we discover them to be. More knowledge has actually strengthened the argument for design by undercutting the chance hypothesis.

Yes, shortening it to just WEHOLD makes it a lot less complex (that being specified complexity's definition of complexity).  But I was asking about specificity, not complexity.

Again, Dembski DID consider this a matter of probabilities, and while he was pretty loose in his definitions of what it means for something to be specified, he tended to use the term as conforming to some pattern that would have extremely low odds of having been conformed to by chance.  It wasn't enough that the string of letters was long, it also had to match something (in this case, a line from the Declaration of Independence).  So what specification is the universe meeting?  It's not spelling out WEHOLDTHESETRUTHSETC, so what exactly are you saying it matches?  The laws of physics might be described as complex (though I feel that's more a subjectively relative measure in this case), but how are they specified?  To what are they conforming?

In any case, a lack of knowledge CAN lead someone to infer design rather than chance.  Going back to the glass planet (without the inscribed words), what if I had no idea how accretion could form planets naturally?  Would that lack of knowledge not make me more likely to infer design?

I think a lot of this latter part was addressed in my reply above.

Complexity, as Dembski defined it, wasn't really about the length of the sequence, but the improbability of getting that particular sequence.  The two are linked, of course, but if we imagine that the bag of scrabble tiles contained a million E tiles and one Q tile and nothing else, getting a thousand E tiles in a row would not be complex in the same way as drawing the much shorter quote from the Declaration would be from a bag with letters in the proportions normal to a scrabble game.  So since we're now talking about fine-tuning constants, how are you determining that the letters in the bag were in proportions that are default for a scrabble game?  Or do you think you just get to assume that a-priori on no basis whatsoever?
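A toy calculation makes the point (the uniform 1/26 per letter is a stand-in assumption for normal Scrabble tile proportions, not the real distribution):

```python
# Bag A: a million E tiles and one Q tile (drawn with replacement).
# A thousand Es in a row is a long outcome, but not an improbable one.
p_e = 1_000_000 / 1_000_001
p_thousand_es = p_e ** 1000          # ~0.999: near-certain, so not "complex"

# Bag B: letters drawn uniformly over 26 (stand-in for normal proportions).
# A specific 20-letter string, e.g. the start of the Declaration quote.
p_specific_string = (1 / 26) ** 20   # ~5e-29: astronomically "complex"

print(p_thousand_es, p_specific_string)
```

Same definition of "complexity", wildly different verdicts, driven entirely by the assumed distribution over outcomes -- which is precisely the a-priori assumption at issue for the constants.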
"To surrender to ignorance and call it God has always been premature, and it remains premature today." - Isaac Asimov
Reply

Good Arguments (Certainty vs. Probability)
I would like a list of scientists who actually agree that the universe is fine-tuned for life.
Test
The following 1 user Likes Bucky Ball's post:
  • TonyAnkle
Reply

Good Arguments (Certainty vs. Probability)
(11-20-2020, 10:00 PM)Bucky Ball Wrote: I would like a list of scientists who actually agree that the universe is fine-tuned for life.

Good luck with that.
Reply

Good Arguments (Certainty vs. Probability)
(11-19-2020, 07:17 PM)SteveII Wrote: ... A ton of stuff....

 Hi Steve! Big Grin

So, you're in the camp of 'Fine tuned'. Okay, cool.

Here's a youtube vid from someone actually studying to become a professional in the biological sciences.



The person he's reviewing, James Tour, is a scientist whose work in nano-tech (I think?) is widely respected, but who still holds to the 'must have been made' stance, like Dembski and others.

Hope you find the review informative. Thumbs Up 

Cheers,

Not at work.
The following 1 user Likes Peebothuhlu's post:
  • skyking
Reply

Good Arguments (Certainty vs. Probability)
Tour is a nanotechnologist. He is a Professor of Chemistry, Professor of Materials Science and NanoEngineering, and Professor of Computer Science at Rice University in Houston, Texas. He is not a Biochemist. Anything he says on the subject is argumentum ad verecundiam. He is not an expert in Biochemistry.
He is not an authority on the origins of anything.
Test
Reply

Good Arguments (Certainty vs. Probability)
(11-21-2020, 12:09 AM)Bucky Ball Wrote: Tour is a nanotechnologist. He is a Professor of Chemistry, Professor of Materials Science and NanoEngineering, and Professor of Computer Science at Rice University in Houston, Texas. He is not a Biochemist. Anything he says on the subject is argumentum ad verecundiam. He is not an expert in Biochemistry.
He is not an authority on the origins of anything.

Yes, the reviewer explains Dr Tour's background and why the good Doctor is pretty much talking out his @ss.

  Thumbs Up 

Cheers.

Not at work.
The following 1 user Likes Peebothuhlu's post:
  • skyking
Reply

Good Arguments (Certainty vs. Probability)
(11-21-2020, 12:13 AM)Peebothuhlu Wrote:
(11-21-2020, 12:09 AM)Bucky Ball Wrote: Tour is a nanotechnologist. He is a Professor of Chemistry, Professor of Materials Science and NanoEngineering, and Professor of Computer Science at Rice University in Houston, Texas. He is not a Biochemist. Anything he says on the subject is argumentum ad verecundiam. He is not an expert in Biochemistry.
He is not an authority on the origins of anything.

Yes, the reviewer explains Dr Tour's background and why the good Doctor is pretty much talking out his @ss.

  Thumbs Up 

Cheers.

Not at work.

Agree. Sorry ... I didn't mean to imply I was disagreeing with the reviewer. I didn't watch much of it.  Dodgy

Not at work, but I probably should be. But then maybe I'll go in on Saturday, so never mind.
Test
The following 2 users Like Bucky Ball's post:
  • Peebothuhlu, skyking
Reply

Good Arguments (Certainty vs. Probability)
(11-20-2020, 10:10 PM)TonyAnkle Wrote:
(11-20-2020, 10:00 PM)Bucky Ball Wrote: I would like a list of scientists who actually agree that the universe is fine-tuned for life.

Good luck with that.

Looks like, even according to AIG, scientists are a godless lot.
https://answersingenesis.org/who-is-god/...re-survey/
Test
The following 1 user Likes Bucky Ball's post:
  • TonyAnkle
Reply

Good Arguments (Certainty vs. Probability)
(11-22-2020, 11:11 PM)Bucky Ball Wrote: Looks like, even according to AIG, scientists are a godless lot.
https://answersingenesis.org/who-is-god/...re-survey/

AiG is noted for its collective ignorance, blatant untruths, and misrepresentation of facts.

From your link:  "But a recent survey published in the leading science journal Nature conclusively
showed that the National Academy of Science is anti-God to the core".

Nope... the cited report says nothing of the sort.  Being an atheist is not about being "anti-God" at
all.  One cannot be against something that simply doesn't exist.  If that were the case, then I could
well be described as "anti-leprechaun".  Which of course is absurd.

The AiG coterie need to familiarise themselves with the term "ignostic", which is in fact what most
of the science-type atheists are  [the term "God" has no coherent and unambiguous definition].

This is just one of the many subtle—and sometimes unnoticed—distortions of terminology that AiG
uses to mount their invariably shaky cases, and attack disbelievers.
I'm a creationist;   I believe that man created God.
Reply

Good Arguments (Certainty vs. Probability)
(11-20-2020, 12:00 AM)Reltzik Wrote:
(11-19-2020, 07:17 PM)SteveII Wrote: Your lottery example has problems. To be analogous to the beginning of the universe, the probability of cheating cannot be calculated--not simply difficult, actually impossible. Someone already won the lottery. You cannot go back and assess the possible ways/opportunities people could have cheated and come up with a probability. That assessment would be based on a priori knowledge of the process that produced the result you are examining.

That was more an example by which we would know, from the outside illustration, that the reasoning was horribly flawed, and the people inside the example needn't know how secure the systems were at all.

But okay, I'll play along.  Let's say we have no information about how difficult the security measures are to beat.  If the odds of a successful cheat of the system are much greater than a chance win,  1e-2 for example, then design would be the reasonable inference, and if they were much lower, let's reuse 1e-12 for the example, then chance would be the reasonable inference.  But we don't actually know if the chance of a win by design is a lot higher, a lot lower, or fairly comparable to the odds of a win by chance.  How can we compare the two probabilities if we don't have the second one, as you insist we can't?  And if we don't know this probability, then why would someone arguing that the winner cheated just automatically assume that a cheat was the reasonable inference?  There are two other options (similar probabilities and probabilities vastly in favor of a fair win) on the table, and now after this restriction of knowledge about the odds of beating security we have no way of knowing which of the three options are more likely.  On what basis do we favor the one above the other two, other than preexisting bias?  This step is missing. 

I am not saying your reasoning is flawed. I am saying that you are focusing on why a particular person won (the odds are that someone will--otherwise no one would buy a ticket). The Teleological Argument focuses on why anyone won--given the staggering odds that no one should have. It is vastly more probable that a life-prohibiting universe would exist. That's why I gave the better analogy of the ball drawing (I'll pick this up below).

Quote:
(11-19-2020, 07:17 PM)SteveII Wrote: The problem seem to be you have applied a counter argument for Intelligent Design in Biology to the question of the initial conditions of the universe. The two are fundamentally different. The amount of information you need to assess what evolution can or cannot do is staggering but you can get to work on it. The Teleological Argument addressed only a dozen or so constants that could have had other values but turned out to be just so. There is no process to examine.

The fine-tuning argument addresses those (or at least tried to).  We could treat the fine tuning argument as a variation on the teleological argument or an example of a larger class of teleological arguments, but just as fine-tuning was around for decades before specified complexity was thrown into the mix, the teleological argument was around for millennia before the constants you're talking about were identified.  I guess you're conflating these two as well, and I'll try to keep it in mind going forward.

You fairly criticized the lottery example for assuming a-priori knowledge, but the fine-tuning argument does the same and more.  It isn't just making a-priori assumptions about the odds that a designer would set the constants this way, it also makes bald assertions that the constants ending up this way by chance are unlikely.

I don't know which list of a dozen or so constants you're talking about, and there have been so many such lists produced (many of which are absurd in their overeagerness to list anything that they'll throw in items that clearly aren't required for intelligent life or which are essentially restatements or implications of other items on the list) that I won't even try to google around to find which one you're talking about.  So I'll just pick the gravitational constant as an oft-cited example.

We can have an interesting argument about how finely-tuned this constant actually needs to be for some form of intelligent life to arise, but for the sake of argument I'll assume it needs to be a narrow range of values.  (And no, I won't define "narrow", because that's a huge mathematical can of worms and that ball is definitely not in my court.)  What are the ACTUAL odds that a universe that came about by chance would get a gravitational constant in that narrow range of values?

We don't know this value.  How would we even find it?  We can't take a frequentist approach, run hundreds of experiments of creating a universe, and statistically estimate the odds of getting that constant.  We can't take the classical approach because about the only thing we can possibly know about the probability distribution is that it's not uniform.  (Uniform distribution across the range of all real numbers is mathematically impossible.)  That leaves Bayesian probability (slightly related to, but distinct from, the Bayesian calculations I was using earlier... yes, some mathematicians get multiple things named after them and, yes, it is annoying).  Bayesian probability is basically just a subjective gut-check of how likely we think something is, and it's less an unbiased look at things and more a way of quantifying someone's bias.  It just can't do the job of proving objective facts, unless they're objective facts about how likely people think something to be.  I wouldn't list it as an option at all, except (A) it's the only one left and (B) it seems to be what the people advancing this argument are using.

I am getting the feeling that you are not entirely familiar with the scientific basis for the denial of Chance in P2. It is not simply a "narrow range" lending itself to lottery analogies. By any reasoning, these are probabilities in the 'incredulous range.' So as not to introduce any of my own bias, I will clip sections from the Wikipedia article on the fine-tuned universe (https://en.wikipedia.org/wiki/Fine-tuned_universe)

Wikipedia Wrote:[from the intro] The characterization of the universe as finely tuned suggests that the occurrence of life in the Universe is very sensitive to the values of certain fundamental physical constants and that the observed values are, for some reason, improbable.[1] If the values of any of certain free parameters in contemporary physical theories had differed only slightly from those observed, the evolution of the Universe would have proceeded very differently and life as it is understood may not have been possible.[2][3][4][5]

[from Motivation] The premise of the fine-tuned universe assertion is that a small change in several of the physical constants would make the universe radically different. As Stephen Hawking has noted, "The laws of science, as we know them at present, contain many fundamental numbers, like the size of the electric charge of the electron and the ratio of the masses of the proton and the electron. ... The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life."[5]

The precise formulation of the idea is made difficult by the fact that physicists do not yet know how many independent physical constants there are. The current standard model of particle physics has 25 freely adjustable parameters and general relativity has one additional parameter, the cosmological constant, which is known to be non-zero, but profoundly small in value.

[from Examples]
Martin Rees formulates the fine-tuning of the universe in terms of the following six dimensionless physical constants.[2][15]

• N, the ratio of the electromagnetic force to the gravitational force between a pair of protons, is approximately 10^36. According to Rees, if it were significantly smaller, only a small and short-lived universe could exist.[15]
• Epsilon (ε), a measure of the nuclear efficiency of fusion from hydrogen to helium, is 0.007: when four nucleons fuse into helium, 0.007 (0.7%) of their mass is converted to energy. The value of ε is in part determined by the strength of the strong nuclear force.[16] If ε were 0.006, only hydrogen could exist, and complex chemistry would be impossible. According to Rees, if it were above 0.008, no hydrogen would exist, as all the hydrogen would have been fused shortly after the Big Bang. Other physicists disagree, calculating that substantial hydrogen remains as long as the strong force coupling constant increases by less than about 50%.[13][15]
• Omega (Ω), commonly known as the density parameter, is the relative importance of gravity and expansion energy in the universe. It is the ratio of the mass density of the universe to the "critical density" and is approximately 1. If gravity were too strong compared with dark energy and the initial metric expansion, the universe would have collapsed before life could have evolved. On the other side, if gravity were too weak, no stars would have formed.[15][17]
• Lambda (Λ), commonly known as the cosmological constant, describes the ratio of the density of dark energy to the critical energy density of the universe, given certain reasonable assumptions such as positing that dark energy density is a constant. In terms of Planck units, and as a natural dimensionless value, the cosmological constant, Λ, is on the order of 10^−122.[18] This is so small that it has no significant effect on cosmic structures that are smaller than a billion light-years across. If the cosmological constant were not extremely small, stars and other astronomical structures would not be able to form.[15]
• Q, the ratio of the gravitational energy required to pull a large galaxy apart to the energy equivalent of its mass, is around 10^−5. If it is too small, no stars can form. If it is too large, no stars can survive because the universe is too violent, according to Rees.[15]
• D, the number of spatial dimensions in spacetime, is 3. Rees claims that life could not exist if there were 2 or 4 dimensions of spacetime nor if any other than 1 time dimension existed in spacetime.[15] However, contends Rees, this does not preclude the existence of ten-dimensional strings.[2]

Carbon and oxygen
An older example is the Hoyle state, the third-lowest energy state of the carbon-12 nucleus, with an energy of 7.656 MeV above the ground level.[19]:125–127 According to one calculation, if the state's energy level were lower than 7.3 or greater than 7.9 MeV, insufficient carbon would exist to support life. Furthermore, to explain the universe's abundance of carbon, the Hoyle state must be further tuned to a value between 7.596 and 7.716 MeV. A similar calculation, focusing on the underlying fundamental constants that give rise to various energy levels, concludes that the strong force must be tuned to a precision of at least 0.5%, and the electromagnetic force to a precision of at least 4%, to prevent either carbon production or oxygen production from dropping significantly.[20]

Dark Energy
A slightly larger quantity of dark energy, or a slightly larger value of the cosmological constant would have caused space to expand rapidly enough that galaxies would not form.[21]

Regarding Bayesian probability, I agree that it is applicable here. You are comparing probabilities--not trying to solve for them. Basically P(Order|Design) > P(Order|not-Design). The argument shows that P(Order|not-Design) is so low as to be approaching zero. P(Order|Design) is bolstered by the fact that life-permitting checks the box for specified complexity.
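The comparison being made here can be sketched as a Bayes-factor calculation. Every number below is an illustrative assumption, not a measured value; in particular, P(Order|Design) and the prior odds are exactly the quantities whose justification is disputed in this thread:

```python
# Sketch of comparing two hypotheses by their likelihood ratio (Bayes factor).
# All numeric inputs are illustrative assumptions, not established facts.
p_order_given_design = 0.5      # assumed likelihood of an ordered universe if designed
p_order_given_chance = 1e-72    # assumed likelihood of an ordered universe by chance
prior_odds = 1.0                # assumed neutral prior odds (design : chance)

bayes_factor = p_order_given_design / p_order_given_chance
posterior_odds = prior_odds * bayes_factor

print(f"Bayes factor: {bayes_factor:.1e}")
print(f"Posterior odds (design : chance): {posterior_odds:.1e}")
```

Note that the output is driven entirely by the assumed inputs: if P(Order|Design) and the prior odds cannot be justified independently, the posterior odds are just a restatement of those assumptions.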

Quote:So take my lotto example, and in addition to removing all knowledge about how likely it is for an attempt to cheat to succeed also remove all knowledge of how likely it is for a given ticket to win by chance.  All we know is that at least one person won.  On what basis do we conclude that this winner cheated?

(11-19-2020, 07:17 PM)SteveII Wrote: A more analogous counter-example will illustrate: There is a drawing of identical ping-pong balls by a machine that works flawlessly to randomize them. To keep the numbers similar, let's say there are 1,000,000 identical balls, except that 999,999 are white and one is red. The odds of a red ball coming down the chute are one in a million. The first drawing, a red one comes down. Improbable, but reasonable. You do the drawing 11 more times and red comes down every time. No other information can be known--only the probability of chance 1:1000000000000 (and 60 more zeros). Is it reasonable to infer something is fixed?

I'd add to that the possibility of the machine being broken and a few other options, but that it's somehow not behaving according to raw chance?  Yeah, that's a reasonable inference.  Now if we don't know a-priori how many balls of what colors are in the machine, and we get 12 red balls, why would we assume that the odds of the red ball popping out are 1e-7 as opposed to 1e-12 or 1e-1 or 9.999997e-7?

Are we allowing a-priori knowledge here, or not?  Or is this another of those apologist games where you demand that I have to play by restrictive rules and then you just ignore the rules all you want?  Because that sort of bottomless dearth of even the pretense of integrity really pisses me off.

To continue from my first comment above, it is not why someone won, it is why anyone won. In a lottery you always have the "someone has to win" factor. To put a finer point on my analogy, the red ball 12 times in a row means you live; any white, you die. We are not interested in why you got the particular ball you did--it is certain that you will get one, equally improbable, ball. We are interested in why you wouldn't question why you got a red one 12 times.
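The arithmetic behind the ball example can be checked directly (numbers taken from the post: one red ball in a million, drawn twelve times):

```python
# Chance probability of twelve consecutive red draws,
# with one red ball among 1,000,000 per draw (numbers from the post).
p_red = 1 / 1_000_000          # single-draw probability of red
p_twelve_reds = p_red ** 12    # twelve independent draws, all red

print(f"{p_twelve_reds:.0e}")  # on the order of 1e-72, matching the 1:10^72 figure
```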

Quote:Complexity, as Dembski defined it, wasn't really about the length of the sequence, but the improbability of getting that particular sequence.  The two are linked, of course, but if we imagine that the bag of scrabble tiles contained a million E tiles and one Q tile and nothing else, getting a thousand E tiles in a row would not be complex in the same way as drawing the much shorter quote from the Declaration would be from a bag with letters in the proportions normal to a scrabble game.  So since we're now talking about fine-tuning constants, how are you determining that the letters in the bag were in proportions that are default for a scrabble game?  Or do you think you just get to assume that a-priori on no basis whatsoever?

There is no reason to think there was a control on the constants (a tile bag). There is no reason to think that every possible value was equally probable.
Reply

Good Arguments (Certainty vs. Probability)
Stevie is, shall we say, *exaggerating* (lying) about Martin Rees.
Rees's book is entitled "Just Six Numbers: The Deep Forces That Shape the Universe".
Nowhere does it say they were "fine-tuned" or "designed". It's about what is observed.
Just like with the Big Bang, fundy Christians like to "take over" the work of others and use it (misrepresent it) for their ends.

The idea that a universe which existed for billions of years before life, and will exist for billions of years after it, was "designed for life" is preposterous bullshit. There are no generalizations possible from one universe. (Stevie seems to know of others that failed ... LOL).

If, as some think, many (a HUGE number of) universes arise and evolve with the right parameters, then ONLY after looking at them can anything be said about universes and life (not just life-as-we-know-it-so-my-Bible-shit-can-be-true universes). If only some of the universes which arise are sustainable, and only some of them allow life, then it's a tautologous piece of nonsense that those are "fine-tuned". Life arises because the conditions ALLOW it, which is exactly what is observed ALL THE TIME, here. "Fine-tuned" is precisely backwards. A universe whose conditions allow for life, which is sustainable, and in which life then arises, is fine-tuned for nothing. It's what happens when billions of universes arise. The unspoken premise/assumption here is that "life as we know it" is the only way life can arise. We simply don't know what happens under other conditions.
Test
The following 1 user Likes Bucky Ball's post:
  • SYZ
Reply



