
A Darwinist responds to KF’s challenge


It has been more than a year since kairosfocus posted his now-famous challenge on Uncommon Descent, inviting Darwinists to submit an essay defending their views. A Darwinist named Petrushka has recently responded over at The Skeptical Zone. (Petrushka describes himself as a Darwinist in a fairly broad sense of the term: he accepts common descent as the result of gradual, unguided change, which includes not only change driven by natural selection but also neutral change.)

The terms of the original challenge issued by kairosfocus were as follows:

Compose your summary case for darwinism (or any preferred variant that has at least some significant support in the professional literature, such as punctuated equilibria etc) in a fashion that is accessible to the non-technical reader — based on empirical evidence that warrants the inference to body plan level macroevolution — in up to say 6,000 words [a chapter in a serious paper is often about that long]. Outgoing links are welcome so long as they do not become the main point. That is, there must be a coherent essay, with

(i) an intro,
(ii) a thesis,
(iii) a structure of exposition,
(iv) presentation of empirical warrant that meets the inference to best current empirically grounded explanation [–> IBCE] test for scientific reconstructions of the remote past,
(v) a discussion and from that
(vi) a warranted conclusion.

Your primary objective should be to show in this way, per IBCE, why there is no need to infer to design from the root of the Darwinian tree of life — cf. Smithsonian discussion here – on up (BTW, it will help to find a way to resolve the various divergent trees), on grounds that the Darwinist explanation, as extended to include OOL, is adequate to explain origin and diversification of the tree of life. A second objective of like level is to show how your thesis is further supported by such evidence as suffices to warrant the onward claim that it is credibly the true or approximately true explanation of origin and body-plan level diversification of life; on blind watchmaker style chance variation plus differential reproductive success, starting with some plausible pre-life circumstance.

It would be helpful if in that essay you would outline why alternatives, such as design, are inferior on the evidence we face.

Here is Petrushka’s reply:

Evolution is the better model because it can be right or wrong, and its rightness or wrongness can be tested by observation and experiment.

For evolution to be true, molecular evolution must be possible. The islands of function must not be separated by gaps greater than what we observe in the various kinds of mutation. This is a testable proposition.

For evolution to be true, the fossil record must reflect sequential change. This is a testable proposition.

For evolution to be true, the earth must be old enough to have allowed time for these sequential changes. This is a testable proposition.

Evolution has entailments. It is the only model that has entailments. It is either right or wrong, and that is a necessary attribute of any theory or hypothesis.

Evolution is a better model for a second reason. It seeks regularities.

Regularity is the set of physical causes that includes uniform processes, chaos, complexity, stochastic events, and contingency. Regularity can include physical laws, mathematical expressions that predict relationships among phenomena. Regularity can include unpredictable phenomena, such as earthquakes, volcanoes, turbulence, and the single toss of dice.

Regularity can include unknown causes, as it did when the effects of radiation were first observed. It includes currently mysterious phenomena such as dark matter and energy. The principle has been applied to the study of psychic phenomena.

Regularity can include design, so long as one can talk about the methods and capabilities of the designer. One can study spider webs and bird nests and crime scenes and ancient pottery, because one can observe the agents producing the designed objects.

The common threads in all of science are the search for regularities and the insistence that models must have entailments, testable implications. Evolution is the only theory meeting these criteria.

One could assert that evolution is true, but it is more important to say it is a testable model. That is the minimum requirement to be science.

PS:

My references are the peer-reviewed literature. We can take them one by one, if kairosfocus deems it necessary to claim the publishing journals have overlooked errors of fact or interpretation.

PPS:

To make Dembski’s explanatory filter relevant, one must demonstrate that natural history is insufficient. So I will entertain ID arguments that can cite the actual history of the origin of life and point out the place where intervention was required or where some deviation from regular process occurred.

Same for complex structures such as flagella. Cite the actual history and point out where a saltation event occurred.

Or cite any specific reproductive event in the history of life and point out the discontinuity between generations.

PPPS:

If CSI or any of its variants are to be cited, please discuss whether different living things have different quantities of CSI. For example, does a human have more CSI than a mouse? Than an insect? Than an onion? Please show your calculation.

Alternatively, discuss whether a variant within a species can be shown to have more or less CSI than another variant. Perhaps a calculation of the CSI in Lenski’s bacteria before and after adaptation.

These are just proposed examples. Any specific calculation would be acceptable, provided it can provide a direct demonstration of different quantities of CSI in different organisms.
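To see why the onion comparison in Petrushka's PPPS has some bite, here is a deliberately naive back-of-the-envelope count: raw sequence capacity at two bits per base pair times rough published genome sizes. This is emphatically not CSI as Dembski defines it (CSI requires an independent specification and a chance hypothesis); it is only the crudest possible bit count, offered as an illustration of the gap Petrushka is pointing at.

```python
# A deliberately naive "information by genome size" count: 2 bits per base
# pair (log2 of the 4 nucleotides) times approximate genome size.  This is
# NOT CSI as Dembski defines it -- no specification, no chance hypothesis --
# just raw sequence capacity, using rough published genome sizes.
genome_sizes_bp = {
    "human":     3.2e9,
    "mouse":     2.7e9,
    "fruit fly": 1.4e8,
    "onion":     1.6e10,
}

for organism, size_bp in sorted(genome_sizes_bp.items(), key=lambda kv: -kv[1]):
    raw_bits = 2 * size_bp  # two bits of raw sequence capacity per base pair
    print(f"{organism:10s} ~{raw_bits:.1e} bits of raw sequence capacity")
```

On this naive count the onion comes out far ahead of the human, and the fruit fly comes last, which is exactly why Petrushka asks for an actual CSI calculation rather than a raw bit count.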

In his original challenge, kairosfocus promised:

I will give you two full days of comments before I post any full post level response; though others at UD will be free to respond in their own right.

So let's hear it from readers: what do you think?

Comments
UB, okie, can we help? KF kairosfocus
KF at #29 and #130... lol, thank you for the encouragement. I'm a one horse parade over here ... steppin' as fast as I can. :) - - - - - - - SB, Chance, thanks :) Upright BiPed
Finally a new post- Someone named RoyC showed up on TSZ referencing Thornton et al. to refute Axe. These people have absolutely no idea... Joe
Joe, a FTR. On Islam 101, here (also cf declaration here and Nehls-Eric here at book length), on the Judao-Christian, triune concept of God, here . On the specifically NT, C1 concept of Jesus as Christ in light of the triune concept of the Godhead, here, leading to the gospel call to salvation, here. FTR, in summary, Islam does aim to worship the God of Abraham, however there are pivotal distinctives involving in key cases . . . there is specific text on this . . . a rejection of credible history (e.g. that Jesus was crucified under Pilate) and a misunderstanding of what Christians understand about God, God the Son incarnate as Christ and Mary. There are also pivotal philosophical distinctives as Pope Benedict XVI recently underscored, dealing with a focus on the absoluteness of Will in the concept of God vs a pattern that emphasises a cluster of characteristics: truth, love, purity and redemptive purpose as pivotal expression of will. I communicate for record, hoping to avoid further tangentiality. I trust you and/or others will find these links helpful. KF kairosfocus
Joe, You utterly missed the point of my question to Sal. He got what I meant and he answered my question. For some reason you don't. Okie dokie. CentralScrutinizer
So you meant to try to use Pascal to tell the difference between one entity. Got it. Joe
Joe @161, That has nothing to do with my post to Sal. But thanks anyway. CentralScrutinizer
NB: GP's first post is up, here. KF kairosfocus
CentralScrutinizer- My point is if it isn't in the Bible then Islam is not to blame for ignoring it. And it remains that the God of Islam is the God of Abraham. No ifs, ands, nor buts about it. And guess what? Jesus was not the God of Abraham. Nor was Jesus the God of Judaism. Capisce? The God of Abraham is the God of the Bible and the God of the Noble Qu'ran. Joe
Joe: Well Jesus = God is contrived.
You're changing the subject. CentralScrutinizer
Before I forget, Congrats Eric Anderson, GPuccio, Timaeus. Sal scordova
CentralScrutinizer:
However, Islam says Jesus is not God, and if you worship him you are an idolater and will burn in hell. Christianity says that Jesus is God, the only way to the Father, and if you don’t worship him you will go to hell.
Well Jesus = God is contrived. Joe
Of course, the well known design strategy of adapting the wheel, multiplied by using and modding more or less standard parts in a catalogue or library, is still on the table. kairosfocus
Oops, missed ref 3: >>3. Zuckerkandl and Pauling, "Evolutionary Divergence and Convergence in Proteins," in Evolving Genes and Proteins, Bryson and Vogel, eds. (Academic, 1965), 101.>> kairosfocus
F/N: Luskin, in Salvo, has a useful piece on the tree of life icon addressed in my challenge: ________ >> . . . [In the 1960's] Linus Pauling and Emile Zuckerkandl, boldly predicted that phylogenetic trees based upon molecular data would confirm expectations of common descent already held by evolutionary biologists who studied morphology (i.e., the physical traits of organisms). They declared, "If the two phylogenic trees are mostly in agreement with respect to the topology of branching, the best available single proof of the reality of macro-evolution would be furnished."3 Hoping to validate Pauling and Zuckerkandl's prediction, biologists set themselves to the task of sequencing genes from all manner of living organisms. Technologies were refined, genomes were sequenced, and new discoveries were made. One revolutionary discovery was made in the 1990s, when it was realized that the "five kingdoms" view of life, taught to many previous generations of students, was incomplete. Examination of the gene sequences of living organisms revealed instead that they fell into three basic domains: Archaea, Bacteria, and Eukarya. About the same time, another discovery was made that confounded evolutionary biologists who studied genes: they found that the three domains of life could not be resolved into a tree-like pattern. This led the prominent biochemist W. Ford Doolittle to famously lament: "Molecular phylogenists will have failed to find the 'true tree,' not because their methods are inadequate or because they have chosen the wrong genes, but because the history of life cannot properly be represented as a tree."4 He later acknowledged, "It is as if we have failed at the task that Darwin set for us: delineating the unique structure of the tree of life."5
[__________ 4. Doolittle, "Phylogenetic Classification and the Universal Tree," Science, 284:2124-28 (1999). 5. Doolittle, "Uprooting the Tree of Life," Scientific American (2000).]
Conflicts in the Trees The basic problem is that, while one gene leads to one version of the tree of life, another gene leads to an entirely different tree. What seems to imply a close evolutionary relationship in one case (i.e., two similar genes) doesn't do so in another. To put it another way, biological similarity is constantly being found in places where it wasn't predicted by common descent, leading to conflicts between phylogenetic trees. When two trees conflict, at least one must be wrong. How do we know that both aren't? . . . . Perhaps the most candid discussion of the problem came in a 2009 review article in New Scientist titled "Why Darwin Was Wrong about the Tree of Life."9 The author quoted researcher Eric Bapteste explaining that "the holy grail was to build a tree of life," but "today that project lies in tatters, torn to pieces by an onslaught of negative evidence." According to the article, "many biologists now argue that the tree concept is obsolete and needs to be discarded." The paper also recounted the results of a study by Michael Syvanen that compared 2,000 genes across six diverse animal phyla: "In theory, [Syvanen] should have been able to use the gene sequences to construct an evolutionary tree showing the relationships between the six animals. He failed. The problem was that different genes told contradictory evolutionary stories." Syvanen succinctly summarized the problem: "We've just annihilated the tree of life. It's not a tree any more, it's a different topology entirely. What would Darwin have made of that?" >> _________ And, we have not got to the root yet -- OOL. KF kairosfocus
F/N: Sewell has just released a smoking gun clip on the 1980 Field Museum closed door meeting, cf. my markup here. It's all there, the stuff we are so confidently told is our distortion. KF kairosfocus
F/N: It is beginning to look like, to keep up their position, evolutionary materialism advocates are increasingly being forced to deny the reality of genuine design and genuine intelligence, which imply a real ability to significantly freely make choices. A computer system is by and large deterministic (save where there are stochastic elements), and can be used to make programmed "decisions," but that is a case of displacing the real choices up one level, to the programmer and the designer of the hardware, or in some cases to the user. Of course, in viewing us as in effect glorified robots with computers for brains programmed by blind watchmaker chance and/or mechanical necessity through blind forces shaped genetic inheritance and/or cultural and psycho-social conditioning, such Darwinists radically undermine the freedom of mind to reason and know and end up in implicit self referential incoherence. As, has been pointed out repeatedly by ever so many, including here at UD. So, ironically, the ones who so often cloak themselves in the lab coats of evidence-based reason etc, thus undercut the foundations of reason and too often (usually inadvertently) open the door to cynical, amoral, nihilistic manipulation -- frequently via Plato's Cave shadow-shows. As has also been pointed out by many from Plato in The Laws Bk X on down to those who have pointed out the implications of the argument from reason. So, it is unsurprising that such will often project accusations of irrationality to those who beg to differ with them, the twist-about, turn-speech accusation tactic is notorious, and notoriously confuses the ordinary onlooker, creating a polarised, toxic, confusing atmosphere in which manipulation thrives. This, too has often been pointed out, including here at UD. It is time to see that the Emperor is naked at the head of the parade, even while imagining that he is dressed in gaudy array, and demanding that we admire his fancy new lab coat of many colours. KF kairosfocus
Joe @148: Same God, ie the God of Abraham.
However, Islam says Jesus is not God, and if you worship him you are an idolater and will burn in hell. Christianity says that Jesus is God, the only way to the Father, and if you don't worship him you will go to hell. Not so simple. CentralScrutinizer
Sal @145: Thanks for your thoughts, and I appreciate your reminder of how much it is that we don't know. I think the question of whether a machine can be considered "intelligent" is interesting, and your cell reproduction examples are interesting (though the regress to a mind still holds quite strongly in every known case). I'm focused on a slightly more nuanced point. Namely, the fact that intelligence -- by its very definition -- includes the ability to select between contingent possibilities. That is the essence of intelligence, and is precisely what makes design possible. The idea of a purely "deterministic intelligence" is, frankly, nonsense, an oxymoron. If something is deterministic then it belongs in the "necessity" category and has no ability to make contingent selections. If it can make contingent selections then it is not deterministic. Couple that rather obvious observation with the fact that deterministic, law-like processes cannot, as a matter of principle, produce information-rich systems and we start to see how quickly the idea breaks down. Thus, the idea that an information-rich system with functional contingent characteristics could be produced by a deterministic entity/process is not only against all experience, it is also against any logic or rationale. It doesn't matter how many PhD's someone has. If they are suggesting that contingent information-rich systems can be designed by a deterministic process, then they are spouting self-contradictory bluster. Sometimes rather than being polite and giving countenance to all ideas, no matter how absurd, we just have to call it for what it is. Design without a designer -- yes, an "intelligent" designer; one that can make contingent selections -- is not an interesting thought-provoking idea that deserves to be taken seriously. It is self-contradictory on its face. It makes no more sense than if someone were to claim they had discovered a round square or a square circle. Eric Anderson
Pascal’s reasoning wouldn’t help you out if the true God happens to be the Islamic God and you worship the Christian God.
Pascal's wager and expected values in general don't tell you whether your wager will succeed; they tell you what is the most rational investment in a sea of uncertainty. Example: If you buy product insurance and your product never breaks down, then you have made a losing wager. But given the sea of uncertainty, it may be a rational wager. For the case of Christianity vs. Islam, if one believes the payoffs are identical, the rational wager is to choose the one you believe has better evidence of being true. It does not mean your distribution function is correct, but it does state the rational move given your assumed distribution. As the old saying goes, "you play the hand you're dealt." On a less theological view, Design vs. Darwin. What is there to gain technologically if Darwin is right? Answer: 0. The Darwinists at UD have said as much here: If Darwinism were true, what is there to gain? However, if ID were true, then that means biology is made by a brilliant designer and the structure of biology, like the privileged planet, may be optimized to help us understand nano-technology. Bill Dembski calls this steganography. What I see is atheistic Darwinists running around promoting a zero payoff idea. The lack of rationality of doing this is jaw dropping, unless of course they have some financial or reputational stake in the matter. It's like they are trying to convince themselves they will not be accountable to God someday. I asked them point blank several times, "why do you guys defend it like the holy grail, what's the payoff to you if you're right?" scordova
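For readers who want Sal's insurance example spelled out as an explicit expected-value calculation, here is a minimal sketch; the premium, replacement cost, and breakdown probability below are invented purely for illustration.

```python
# Toy expected-value comparison for the product-insurance wager above.
# All numbers are invented for illustration only.
replacement_cost = 500.0   # what you pay if the product breaks and is uninsured
premium = 60.0             # what the insurance wager costs up front
p_breakdown = 0.10         # assumed probability the product breaks

expected_loss_uninsured = p_breakdown * replacement_cost   # 50.0
expected_loss_insured = premium                            # 60.0

print(f"Expected loss, uninsured: {expected_loss_uninsured:.2f}")
print(f"Expected loss, insured:   {expected_loss_insured:.2f}")
# With these assumed numbers the premium exceeds the expected loss, so
# declining is the better bet on average -- yet a buyer whose product does
# break still comes out ahead by having insured.  The calculation ranks
# wagers under an assumed distribution; it does not settle any single case,
# which is Sal's point.
```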
SalC: I think the actual issue pivots on the nature of induction, and particularly inference to best explanation. 1 --> Deductive arguments are rooted in axioms and their implications under various transformations; which may be surprising to us because of our finitude. But they don't actually strictly add new knowledge. 2 --> Induction, though it sacrifices certainty relative to premises, is ampliative; adding credible knowledge when done right. 3 --> In this case, we hope to understand/explain the remote, unobserved past of origins in light of traces in the present and characteristic causes of such signs to be explained. 4 --> In this sense, P is right to ask for regularities. What happens is s/he is not consistent enough, and fails to properly address the vera causa principle. 5 --> That is, Newton is right to insist that we explain traces of what we cannot directly observe on causal factors that per here and now studies consistently and characteristically account for the effects. Either, "all the time," or by setting up distributions that with reasonable frequency, give the effects. 6 --> Characteristically is pivotal, as Meyer discusses in Signature in the Cell. For if two or more factors may give rise to an effect with reasonably comparable plausibility, unless we can find an aspect that only one accounts for well, we are left with an unresolved ambiguity. Then, we have to choose on some form or another of Pascal's wager: err on the side of prudence, charity or the like. 7 --> Which, is why unless there is warrant beyond reasonable doubt, we find a criminal defendant not guilty in Common Law derived jurisdictions. 8 --> What happens is that we have found a cluster of items, mostly connected to FSCO/I that are highly characteristic of design, and which per needle in haystack analysis are utterly implausible on chance or on chance plus necessity. 9 --> Chance, being defined in terms of factors that give rise to high and stochastically distributed contingency with, on average, patterns that fit mathematical models of randomness. If mechanical necessity were dominant, there would not be high contingency of outcomes under similar initial circumstances. 10 --> Design, being much as described above:
(noun) a specification of an object, manifested by an agent, intended to accomplish goals, in a particular environment, using a set of primitive components, satisfying a set of requirements, subject to constraints; (verb, transitive) to create a design, in an environment (where the designer operates)[2 Ralph, P. and Wand, Y. (2009). A proposal for a formal definition of the design concept. In Lyytinen, K., Loucopoulos, P., Mylopoulos, J., and (Robinson, W.,) editors, Design Requirements Workshop (LNBIP 14), pp. 103–136. Springer-Verlag, p. 109 doi:10.1007/978-3-540-92966-6_6.]
11 --> Design is an observed process and is characteristic as an output of intelligent agents, on a huge observational base. Intelligence, being something which we exhibit, beavers exhibit to a lesser degree, etc; both humans and beavers being designers. 12 --> The inductive inference to best explanation is then quite simple: FSCO/I . . . a phenomenon as common as the text in this thread and the PCs etc that people use to read it . . . is real, observable and even measurable. It is on billions of cases a reliable product of design, only observed as a product of design, and per the needle in haystack analysis is only plausibly the result of intelligently directed contingency aka design. 13 --> So, if and when we see FSCO/I in traces from the remote past of origins, we are epistemologically and logically entitled to take it as a reliable sign pointing to design as the best causal explanation of the events or effects or objects exhibiting these traces. 14 --> This is strictly independent of opinions, views, rhetoric or accusations of those who may wish to differ; on evidence . . . such as the Lewontin admission in NYRB, or Dawkins' declamations and fulminations, etc etc . . . often because of hostility to a certain candidate designer, namely God. 15 --> A simple examination of the chain of reasoning above and its history back to Plato will show that the inference in view is a contrast between what blind chance and/or mechanical necessity can do and what art . . . design . . . can do. Those who refuse to acknowledge this simply show that they are failing to do duties of care to logic, accuracy, evidence, reason and fairness. 16 --> I am under no obligation whatsoever to try to take up the burden of showing to the unreasonable, wrong-headed and too often wrong-hearted (remember the sort of menacing threats that have been made against my family, including minor children), that they are in error; beyond laying out the case for record. 17 --> Which has long since been adequately done. 18 --> Further to this, take a moment to glance at the challenge of 1 1/2 years ago and the level and tone of response after all of that time. More than enough time to have written a complex thesis, much less lay out a case commonly asserted to be as sure as the roundness of the Earth or the orbiting of the planets around the Sun. 19 --> It is utterly clear that the Emperor is stark naked at the head of the parade, pretending to be in gaudy array. GAME OVER. KF kairosfocus
CentralScrutinizer:
Pascal’s reasoning wouldn’t help you out if the true God happens to be the Islamic God and you worship the Christian God.
Same God, ie the God of Abraham. Joe
Sal: If there is a chance the Designer is the Christian God to whom I one day will be accountable, then resolving the question absolutely is already moot as far as I’m concerned personally. I know enough from Pascal that it is bad bet to wager on the non-existence of God.
Pascal's reasoning wouldn't help you out if the true God happens to be the Islamic God and you worship the Christian God. Obviously there are more than two possible options here. CentralScrutinizer
Sal, According to Meyer the machines trace back to an intelligence. So yes, machines can make a design because they were made to do so. The definition of intelligence: (the bold & underlined parts apply)
1a (1): the ability to learn or understand or to deal with new or trying situations : reason; also: the skilled use of reason 
      (2): the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (as tests)
Joe
What kind of design do you think can possibly — even in principle — exist without some kind of intelligence?
1. If a feature of intelligence is free will, but if the Designer is a deterministic entity without free will, then is it intelligent in our sense of intelligence? Spinoza and Einstein's God and the Deist God are probably deterministic in conception. Do I buy those arguments? No. Can I refute them on logic? I don't try, I don't see the point. If there is a chance the Designer is the Christian God to whom I one day will be accountable, then resolving the question absolutely is already moot as far as I'm concerned personally. I know enough from Pascal that it is bad bet to wager on the non-existence of God. I was one of the first to broadcast this peer-reviewed pro-ID article: https://uncommondescent.com/intelligent-design/another-pro-id-paper-passes-peer-review/ which admits the possibility of a deterministic designer, in which case it would be dubious that we might even call it intelligent in the first place. In defense of Voie, he only cited that as a possibility, I'm pretty sure he thinks the intelligence isn't a computer. 2. Mis-identified design because of faulty understanding of physical and chemical behavior. The classic example are the craters of the moon which were once thought to be created by civilization since they were so perfectly symmetrical. I myself made a design claim at UD that was eventually falsified and may need reworking or might be abandoned altogether. In fact it may have been rooted in a mistake in one of ID's founding books, Mystery of Life's Origin. See: CEU Forum Homochirality discussion 3. If we define "intelligence" as something that makes artifacts that pass the EF, then machines can be said to be intelligent, and there are ID proponents like myself that would lean in that direction. Of course machines need machine makers, and it seems it regresses back to God, imho. But suggesting that intelligence can be mechanical (like weak AI) didn't make me many friends at UD. I pointed out a sperm and ovum as far as we know are not sentient, yet they make sentient beings with non-material souls, or should we say they made bodies that house non-material souls. How do I resolve such paradoxes? I don't know, and I don't care, I only know I will pass away one day and may stand before my creator. I'm not wagering on His non-existence. But the problem of machines making artifacts that pass the EF remains. We can resolve it by saying: 1. machines are intelligent (anathema to most ID proponents) 2. machines aren't intelligent (and then that provides a proximal counter example to intelligence only being the cause of design, especially in the event the Designer is himself a deterministic entity like Einstein's God. ) I personally don't delve much into these questions because Pascal's wager has settled which ideas I will live by, and these question seem irrelevant to the reason I studied ID to begin with. scordova
Intelligence is that which can create counterflow. Counterflow being that which nature, operating freely, would not or could not have produced- Ratzsch, "Nature, Design and Science". And we do make at least one prediction wrt designers, regardless of their capriciousness-> when they act within nature they tend to leave traces of their actions behind. These traces can be detected and investigated. Joe
Sal @ 141: What do you mean by "intelligence"? If we look at the very etymology of the word, it means "to choose between" contingent possibilities. That is precisely what design is about. What kind of design do you think can possibly -- even in principle -- exist without some kind of intelligence? Eric Anderson
RB: Thanks for the clarification, as it looked like you were posting your own comments. Thought you had gone off the deep end there for a moment. :) ----- Mapou @125:
Questions about the identity or methods of the designers may be second order to the design inference but, in the greater scheme of things, they are the first order questions that need to be asked.
Well, no doubt some people are interested in them more than they are interested in the design inference. And some people have taken up the concept of ID because they think it will help them prove their personal philosophical/religious ideas about the designer.* But that doesn't change the fact that the question about detecting design and the existence of a designer is both (i) what ID is really about, and (ii) logically prior to the second-order questions. If the answer to the design inference is negative, then we don't even get to the second-order questions.
The problem with the ID side of the debate is that they have allowed the opposition to dictate how the debate should be conducted. This is weak, in my opinion, and it’s probably the main reason that you are not winning.
I'm not sure what you mean by this. If you mean that the anti-ID folks would like to see objective, science-based explanations, then I certainly have no objection to that, and that isn't their idea anyway, it is a general rational approach to the study of the world around us. We should be thrilled to have the debate conducted on such grounds. However, if you mean that the anti-ID folks are insisting that the design inference be kept separate from the second-order questions, then you have it completely backwards. It is the primary ID proponents who have insisted from day one that they need to be kept separate, while the anti-ID proponents valiantly attempt at every turn to conflate the two. They would love nothing more than to make the whole discussion about the second-order questions and the possible philosophical and religious implications. Indeed, the philosophical and religious issues are a primary motive of most ardent anti-ID proponents. So, no, the anti-ID proponents don't get to dictate how the debate is conducted. But, unfortunately, when people who are sympathetic to ID start conflating the objective, observation-based, scientifically-sound design inference with a bunch of other second-order questions, they play right into the hands of the anti-ID crowd. I trust you can see that this is what I am trying to prevent, not what I am advocating. ----- * Incidentally, a week or two ago I watched a documentary about "God's grand creation" or some such title. They referenced "design" a number of times, and even "intelligent design" once or twice. There were some good, objective, science-based parts of the film. Unfortunately, there were other parts that made me grimace. I don't begrudge anyone who wants to use scripture, for example, as the basis for their truth claims (though such an exercise is fraught with difficulties that could be discussed another time). But it is a little disconcerting to see a mish-mash of various concepts thrown together without clear delineation. Although the materially-committed, anti-religious types of the world don't have a leg to stand on, occasionally I empathize after seeing stuff like that. It would be all to easy to come away thinking "this is just a bunch of religious mumbo jumbo." This is precisely the inaccurate impression of ID that we need to avoid. And we do so by carefully distinguishing between the evidence-based claims of ID and the second-order implications that might flow from an affirmative answer to the design inference. Eric Anderson
KF, There are two questions. 1. is ID the most reasonable assertion? I say yes. 2. is there much to be gained by insisting Intelligence is the best cause? Maybe, maybe not. If I'm talking to Nick, I don't insist on it, I don't have to, as you can see: Statistics Question for Nick. There was a reason I specifically asked Barry to pose the question to Nick in terms of chance or not-chance. I did not have the question posed to Nick about intelligent agencies. The reason is to give Nick and others no wiggle room except to look like a sophist in trying to argue in favor of chance. If I'm talking to you, I don't even have to say it because like you I think, "if there is a design, there must be a designer". I don't hold it against someone being skeptical of an ultimate intelligence or agnostic about an ultimate intelligence creating life -- Michael Denton, David Berlinski, Jack Trevors, Fredy Hoyle, Robert Jastrow will probably fall in that category. They are fair, but they wouldn't label themselves ID proponents. Being undecided is a respectable intellectual position, but obviously I think it is a risky spiritual position if the Designer is God. I certainly state what I believe, but I won't insist on ID inference with same certainty I insist on a plain vanilla D (design) inference. The Atheist/Agnostic Hoyle was a pioneer of ID theory, but in his case he was sort of pantheistic as far as the source of intelligence. Jack Trevors is another atheist, but he's argued better than almost anyone that life cannot be the product of chance and necessity. So there are people that have argued for D inferences quite vigorously without taking the final step and saying Intelligent Design or God Design. scordova
F/N: Observe, Gauger is addressing body plan origin, in this case the transformation of an ape-plan to a human one. I think we can take it to the bank that the concerns she highlights are not in the typical textbooks and documentaries or museum displays. KF kairosfocus
SalC: Let me clip and highlight the Wiki article, Design:
design has been defined as follows.
(noun) a specification of an object, manifested by an agent, intended to accomplish goals, in a particular environment, using a set of primitive components, satisfying a set of requirements, subject to constraints; (verb, transitive) to create a design, in an environment (where the designer operates)[2 Ralph, P. and Wand, Y. (2009). A proposal for a formal definition of the design concept. In Lyytinen, K., Loucopoulos, P., Mylopoulos, J., and (Robinson, W.,) editors, Design Requirements Workshop (LNBIP 14), pp. 103–136. Springer-Verlag, p. 109 doi:10.1007/978-3-540-92966-6_6.]
Another definition for design is a roadmap or a strategic approach for someone to achieve a unique expectation. It defines the specifications, plans, parameters, costs, activities, processes and how and what to do within legal, political, social, environmental, safety and economic constraints in achieving that objective.[3] Here, a "specification" can be manifested as either a plan or a finished product, and "primitives" are the elements from which the design object is composed . . .
In short, design is intelligently directed contingency, constrained to particular clusters of configurations by purpose. The use of "intelligent" therefore is a case of emphasis, meant to highlight agency involvement in the face of the "designoid" concept of things that only appear to be actually designed. (Similarly, in Newton's laws of motion L1 is F = 0 => a = 0, and is entailed by L2: F = dP/dt. But, for conceptual clarity, L1 is very important, identifying what inertia is.) ISCID . . . nope not THAT one . . . International Council on Societies of Industrial Design:
Design is a creative activity whose aim is to establish the multi-faceted qualities of objects, processes, services and their systems in whole life cycles. Therefore, design is the central factor of innovative humanisation of technologies and the crucial factor of cultural and economic exchange . . .
In short, intelligence can be taken as an empirically established phenomenon as common and familiar as writers making posts in blogs, even those objecting to the concepts of intelligence and design due to the fallacy of selective, double-standard based hyperskepticism. The point of design theory is that design leads to configs that often manifest empirically reliable signs of design. So in cases of origins science where we see such signs the vera causa principle allows us to reasonably infer the presence of designing as causal process. That design points onwards to intelligence in designers is a secondary though important question for the scientific causal inference. I do think that there is a place for refusing to grant to the double-standard-using objector manifesting selective hyperskepticism, a veto over that which is plainly cogent. KF kairosfocus
F/N: Gauger on her work with Axe, with onward context: ___________ >>The Real Barrier to Unguided Human Evolution By Ann Gauger [Biologic Institute] Comparing DNA sequences and estimating by how many nucleotides we differ from chimps doesn’t tell us much about what makes us human. Many of those nucleotide differences have no effect, because they are the product of neutral mutation and genetic drift. While these neutral mutations may affect the over-all mutation count, they don’t answer how many mutations are required for the transition from chimp-like to human. This problem is analogous to one we examined concerning protein evolution last year in the BIO-Complexity journal (Gauger and Axe 2011). Converting one protein to another’s function can be viewed as a mini version of converting one species to another. But it is much easier to convert proteins than species. [two proteins pic] We began by identifying two proteins that are close together in structure, but that have distinct functions. We examined what the minimal number of mutations to convert one protein to the other were. If all the places where they differed had to be changed, that would mean we would have to switch 70 % of one protein to achieve conversion to the other’s function. It’s unlikely that all those mutations are required, however, since many if not most of those changes are due to neutral mutation and drift, just like in the chimp-like to human case. So to estimate the minimal number of mutations required for conversion to a new function, we identified and tested the most likely amino acid candidates using structural and sequence comparisons, one by one and in combination. We ended up changing nearly the entire active site to look like the target protein, but failed to achieve conversion. Based on the number of groups we changed, we made a minimum estimate that seven specific mutations would be required for a functional shift to be observed. To get seven coordinated mutations takes way too long, even for bacteria, with their high mutation rate and large population sizes. 10^27 years is our estimate, based on Doug Axe’s population genetics model, also published in BIO-Complexity. Personally, I think the chimp-like to human conversion would have to have taken many more years than any protein conversion, if it happened at all. A few years back Durrett and Schmidt published two papers where they estimated how long it would take to get a single mutation, and then a second mutation to produce an eight base DNA binding site somewhere within a thousand base region near a gene. They stipulated that within the thousand base region there already was a sequence with six out of eight bases matching the target. The reason they chose to examine DNA binding sites? Many evolutionists think that evolution happens by changing gene expression, and changing gene expression most often requires changes to the regulatory regions around genes. Their results? They calculated it would take six million years for a single base change to match the target and spread throughout the population, and 216 million years to get both base changes necessary to complete the eight base binding site. Note that the entire time span for our evolution from last common ancestor with chimps to estimated to be about six million years. Time enough for one mutation to occur and be fixed, by their account. To be sure, they did say that since there are some 20,000 genes that could be evolving simultaneously, the problem is not impossible. But they overlooked this point. 
Mutations occur at random and most of the time independently, but their effects are not independent. Mutations that benefit one trait may inhibit another. In addition, many if not all these traits are complex adaptations. Each trait requires multiple mutations to achieve a beneficial change. And many of the traits must occur together to be of any benefit. Take for example the changes required for upright bipedalism. Hips, legs, feet, spine, ribcage, and skull all need to work together to allow free and efficient motion. All must be changed. But changing the hips before changing the angle of the legs would not be helpful. Changing to upright posture without lengthening the neck and setting the skull atop the spine would not work. The point is this. There are hundreds of traits that distinguish us from chimps, probably requiring tens of thousands of mutations in total. But even if it takes only 30 or 40 specific trait changes to move from primate to human, and hundred of mutations, the time required would be astronomical. Longer than the age of the universe, actually. Like Sternberg’s argument about whales, the argument from what is required to what is possible shows there just isn’t time enough for it to have happened by unguided means. >> __________ Sobering. And, demanding a sober answer. KF kairosfocus
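As a purely illustrative aside on the numbers in the Gauger excerpt: the sketch below is a naive "all the specific mutations in one birth" bound, not Axe's population-genetics model and not Durrett and Schmidt's calculation, and the mutation rate used is an assumed round figure.

```python
import math

# Naive bound: expected number of births before one individual happens to
# carry k specific point mutations, all arising in that same birth.
u = 1e-8  # assumed per-site mutation rate per birth; illustrative round number

def births_needed(k):
    # Probability of k specific independent point mutations in one birth is
    # roughly u**k, so the expected number of births to wait is its inverse.
    return 1.0 / (u ** k)

for k in (1, 2, 7):
    print(f"k = {k}: about 10^{math.log10(births_needed(k)):.0f} births")
# k = 1: ~10^8, k = 2: ~10^16, k = 7: ~10^56 births.
# Real models, including those quoted above, let mutations accumulate
# sequentially and drift while neutral, which shortens the wait enormously;
# the dispute is over whether available time and population sizes shorten
# it enough.
```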
specification, irreducible complexity,
Those are D theories. We can say something is specified without reference to intelligence. We did that when we asked Nick Matzke about 500 coins; not once did we mention intelligence. We can do the same for IC. The importance of this is that we can then critique whether a proposed mechanism (like Darwinian evolution) can create artifacts that are specified. IC systems are a subset of specified systems. D inferences (Design Inferences) are formal, mathematical and unassailable. The only way to assail a D inference is with a Chewbacca defense as illustrated in A Statistics Question for Nick Matzke. ID inferences are circumstantial. Though circumstantial, I believe them to be correct. A fence sitter or someone sympathetic but willing to be persuaded toward ID will find D theory arguments adequately convincing of ID for the simple fact that if something looks designed, it must have a designer. People who will insist vehemently otherwise will never be persuaded, and we only argue with them to make them look stupid in front of the fence sitters as we did in A Statistics Question for Nick Matzke. Unfortunately the Darwinists figured out our strategy and Nick and others wised up and didn't even show up for the next simple question: Another question for Matzke
If “seeing” by eyes is the conditio sine qua non of knowledge, then half of modern science is not knowledge and is not believable. Have you ever seen, say, an electron, a black hole, a quark, a multiverse…?
Electrons may not be directly observed, but we can state their properties and predict their behaviors (i.e. mass, charge, spin). Not so with intelligent agencies because they are capricious. It's not exactly right to say that one unobserved entity (intelligence that created life) is in the same class as another unobserved entity (electron). Intelligence is capricious, electrons are predictable or at least predictable in their capriciousness (i.e. Heisenberg uncertainty). Science can make lab experiments with unobserved, but predictable entities. Science will be hard pressed to deal with unobserved, unpredictable, or historical entities. One can still call it science if they insist, but it is not in the same class of verification, even if ultimately true. It's a mistake to equate operational science with historical or circumstantial claims. Evolutionary biologists like Coyne do that, but ID proponents invite unnecessary arguments if they don't recognize the difference. scordova
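For the record, here is the arithmetic behind the 500-coin question referred to above, worked out as a number; the 10^150 comparison figure is the ballpark Dembski cites as a universal probability bound.

```python
import math

# The 500-coin question, worked as a number: the probability that 500
# independent fair coin flips all come up heads, and the same value in bits.
p_all_heads = 0.5 ** 500
bits = -math.log2(p_all_heads)

print(f"P(500 heads) = {p_all_heads:.2e}")   # about 3.05e-151
print(f"             = {bits:.0f} bits")     # 500 bits, by construction
# Dembski's universal probability bound is roughly 10^-150 (about 10^80
# particles x 10^45 Planck-time events per second x 10^25 seconds), and
# 2^-500 falls just below it, which is why 500 bits recurs as a threshold
# in these exchanges.  Whether any biological structure is relevantly like
# 500 specified heads is, of course, the disputed step.
```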
RB: I suggest the use of a version of the French quote marks, as using the angle bracket that goes the other way has unfortunate effects in WP. As in SOURCE: >>cited>> (Now you know why I do some of those odd little things. BTW, some at TSZ need to know that END at the end of a text file is not just a book convention but tells that the text has in fact completed. I do not want to use the more cryptic hash marks ### ) KF PS: Post scripts and footnotes (as well as end notes) explain themselves, as do numbered arrow bullet points, which trace ultimately back to a Readak course I did 40 years ago. Lists of works cited would be just too formal. Square brackets will not get funny interpretations from WP or Word. kairosfocus
kairosfocus, Petrushka et al. are making ignorance-based claims about us and Dr Axe. And I can handle myself but I am not going to let those minions attack Dr Axe. Those ilk don't know anything compared with Axe. petrushka's insipidity is well known. He is just a lower-class version of Zachriel. Joe
Reciprocating Bill: I apologize for the misunderstanding. I thought you were addressing Petrushka, and giving your personal ideas. So, all my comments in posts #128 and #129 are meant for Petrushka. Again, I apologize. gpuccio
GP:
As you posted your personal comments too, I will try to answer them.
I should have been more clear in my last post. All of the comments appearing under my name above, including 119, are Petrushka's (and 119 does begin, "Petrushka:"). Reciprocating Bill
PPS: SB, GP, while VJT is deservedly getting star headlines, the steady creep up on what is still UD's No 2 most hit post, the WACs (it's not really a normal FAQ), now at 31,718 . . . near enough to where the Dr Tour post took off from . . . are also showing the significance of our joint effort from those several years back. kairosfocus
GP, well said -- as usual. KF PS: Good to see the old troika back pulling together in this thread! kairosfocus
EA, Mapou & RB: Indirect paths dependent on co-optations face Menuge's challenges C1 - 5:
For a working [bacterial] flagellum to be built by exaptation [[--> pulling together diverse parts that work elsewhere, to form a new functional entity], the five following conditions would all have to be met: C1: Availability. Among the parts available for recruitment to form the flagellum, there would need to be ones capable of performing the highly specialized tasks of paddle, rotor, and motor, even though all of these items serve some other function or no function. C2: Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time. C3: Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed. C4: Coordination. The parts must be coordinated in just the right way: even if all of the parts of a flagellum are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant. C5: Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if a paddle, rotor, and motor are put together in the right order, they also need to interface correctly. [Agents Under Fire: Materialism and the Rationality of Science (Rowman & Littlefield, 2004), pp. 104-105 . HT: ENV.]
The Apollo 13 rescue after the service module explosion aptly illustrates the sorts of challenges involved and the best solution: high-grade engineering. In short, "adapt the wheel (Don't re-invent it)" is a well known DESIGN strategy, one that often relies on the tendency to standardise components. Indeed, back in 1996, Behe quite cogently observed:
What type of biological system could not be formed by “numerous successive, slight modifications?” Well, for starters, a system that is irreducibly complex. By irreducibly complex I mean a single system composed of several well-matched interacting parts that contribute to the basic function, wherein the removal of any one of the [core] parts causes the system to effectively cease functioning. [[Darwin's Black Box (Free Press, 1996), p. 39, emphases and parenthesis added. Later, he highlights the emergence of such steps by noting: “An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway.” (A Response to Critics of Darwin’s Black Box, by Michael Behe, PCID, Volume 1.1, January February March, 2002; iscid.org)] Even if a system is irreducibly complex (and thus cannot [[plausibly] have been produced directly), however, one can not definitively rule out the possibility of an indirect, circuitous route. As the complexity of an interacting system increases, though, the likelihood of such an indirect route drops precipitously. And as the number of unexplained, irreducibly complex biological systems increases, our confidence that Darwin's criterion of failure has been met skyrockets toward the maximum that science allows. [[Darwin's Black Box (Free Press, 1996), p. 40 . Parenthesis added.]
In effect, a complex, functionally specific organised system can be described by making a nodes- and- arcs 3-dimensional "drawing" or description of parts and the way they are arranged and coupled together, which in turn can be reduced to strings of bits such as is used in AutoCAD and similar Drawing Packages. This boils the task down to a structured string of Yes/No questions, i.e. as has been repeatedly noted, analysis on strings is WLOG. As the string of Y/N questions -- bits -- for a functionally specific system rises to and then exceeds the threshold of functionally specific complex information and/or organisation of 500 - 1,000, the plausibility of finding such an entity by blind chance and/or mechanical necessity . . . Dawkins' Blind Watchmaker . . . rapidly diminishes to the point of vanishing. Not to mention, the challenge of getting incremental mutations to "set" in populations in reasonable times at reasonable mut rates and with reasonable pop sizes. 200 MY for a couple of co-ordinated muts does not work. And remember actual info and function gain muts as well as stepping beyond the roughness and sub optimisation problems are seriously unsolved. The smooth back-slope up Mt Improbable on Continent of Life [--> Isle Isolated] by blind watchmaker mechanisms is a just so story, not empirically well-substantiated fact. Not, where origin of novel body plans is concerned. No wonder Wells argues:
The origin of Species included only one illustration, showing the branching pattern that would result from this process of descent with modification . . . Darwin thus pictured the history of life as a tree, with the universal common ancestor as its root, and modern species as "its green and budding twigs." He called this the "great Tree of Life." [[--> The echo of the one in Genesis and the Revelation is obvious, and shows the shift in pivotal imagery and how it has affected our civilisation. This is similar to the debate about rationalist "enlightenment" vs. the Medieval "dark ages" vs. Johannine light and darkness imagery, which provides significantly unwarranted plausibility for the concept of a war of enlightening rationalistic science with benighted and oppressive religion. Cf. Pearcey's evaluation of this ill-founded metaphor.] Of all the Icons of Evolution, the Tree of Life is the most pervasive, because descent from a common ancestor is the foundation of Darwin's theory. Neo-Darwinist Ernst Mayr boldly proclaimed in 1991 that "there is probably no biologist left today who would question that all organisms now found on the earth have descended from a single origin of life." Yet Darwin knew -- and scientists have recently confirmed -- that the early fossil record turns the evolutionary tree of life upside down. Ten years ago [[-->i.e. in the 1990's] it was hoped that molecular evidence might save the tree, but recent discoveries [[--> of sharply divergent molecular "trees"] have dashed that hope. Although you would not learn it from reading biology textbooks, Darwin's Tree of Life has been uprooted . . . . Darwin believed that if we could have been there to observe the process, we would have seen the ancestral species [[--> e.g. of humans and fruit flies] split into several species only slightly different from each other. These species would then have evolved in different directions under the influence of natural selection. More and more distinct species would have appeared; and eventually at least one of them would have become so different from the others that it could be considered a different genus . . . differences would have continued to accumulate, eventually giving rise to separate families . . . . Thus the large differences separating orders and classes would emerge only after a very long history of small differences: "As natural selection acts only by accumulating slight, successive, favourable variations, it can produce no great or sudden modifications; it can act only by short and slow steps." These "short and slow steps" give Darwin's illustration its characteristic branching-tree pattern . . . . But in Darwin's theory, there is no way Phylum-level differences could have appeared right at the start. Yet that is what [[--> understood on the conventional timeline] the fossil record shows. [[Icons of Evolution, 2000, pp. 29 - 35.]
In the end, the facts will out, and the dynamics of reality will break in on fantasy worlds. So, rhetorical "victories" and manipulation of the public, the media, courtrooms and legislatures may prop up a system that is quite evidently an ideological shadow show for a time. But one day, reality will come a-knocking. For instance, the above thread and the obvious problems Darwinists of whatever stripe are having with grounding their case on evidence in light of the vera causa principle, shows the problem more starkly than critics of blind watchmaker thesis hydrogen to humans via cosmological, chemical, and biological evolutions in train, can put in words that are liable to be drowned out by those who hold power and influence. But, I think a lot of people are waking up and wising up. That's why the current count on VJT's Dr Tour post is 155,170 and rising fast enough that I cannot post in time for that to still be true when I hit post. Soon, the awake and alert are going to act up, and they will not forget just who were ever so busily manipulating them with shadow shows. (BTW, I have put the animation of the parable in the RH column of my personal blog. It also appears in the post I put up in SC's new CEU, in the thread on threadjacking tactics.) KF PS: EA, GP, Timaeus, looking forward to posts. SB and UB, we need to hear a lot more from you folks. UB, launch mon, LAUNCH !!!!! kairosfocus
To Petrushka: I invite you to read my previous post to RB. I will go on from that.
The number of states being tested is always limited to those in the immediate vicinity. That is not an astronomical number. Once a duplication occurs — as in the Lenski experiment — negative selection is pretty much irrelevant. Most mutations in the duplicate are going to be neutral. Again, Lenski.
So, we have two different scenarios here. Please, be simple and do not equivocate between the two: 1) A functional gene can test new states, but only in the "immediate vicinity". Negative NS prevent, here, any traversing of the protein space in the direction of new, different functional islands. The result? The big bang theory of protein evolution: sequences change, while maintaining structure and function. With a bit of luck, we can accept some optimization here of what already exists. Never a new complex functional structure. 2) A duplicated, inactivated gene can test any new state. Negative NS does not apply here. Well, and here the probabilistic barriers apply completely. Mutations are neutral by definition, unless and until a new functional state is reached by mere RV (and it must also in some way be transcripted and translated to have any effect on reproductive fitness, and you must explain how that can happen to a duplicated, non functional gene!). Please, remember that genetic drift is part of RV, and does not change the probabilistic barriers in any way. So, what is your argument? Functional genes can only remain the same, or sometimes be slightly optimized. And non functional genes can go anywhere, but have no reasonable probability of finding a new functional island. What do you think?
Regarding your “Experimental Rugged Fitness Landscape in Protein Sequence Space”: You are the proponent of intelligent selection. You have argued it is more powerful than natural selection, and I have argued the opposite. You present evidence that goal directed design is hard. Your conceptual problem is assuming there is a goal. You ignore sideways meandering.
I don't understand your point. Please, read my previous post to RB. The rugged landscape paper is exactly about the failure of RV + natural selection, with all its possible "sideways meandering". On the contrary, the Szostak paper (and all modern protein engineering) and antibody maturation are exactly about the triumphs of design and intelligent selection. Design, with its top down and bottom up strategies, can achieve all that RV + NS never will. Finally, I still don't understand your argument about languages. I certainly agree that natural languages are not designed by a single person, but they are formed, and they evolve, through the interactions of all the conscious intelligent people who speak them. That is not a "non-design system". Understanding and purpose are an integral part of the system. Therefore, the evolution of the system is largely designed, although not in the same sense as a poem or a piece of software, and not by a single designer. Therefore, again, your attempt to derive conclusions about non-designed, non-conscious systems from a system where consciousness, meaning and purpose are an integral part is really misleading. gpuccio
Reciprocating Bill: First of all, I want to really thank you for taking the time to post Petrushka's comments here. I appreciate that very much. As you posted your personal comments too, I will try to answer them. That will answer in part also some of Petrushka's arguments, even if I will try to assess them directly in my next post. To begin, I think that you are wrong in seeing the rugged landscape as an obstacle to directed evolution while allowing non-directed evolution. That is completely wrong. I must remind you that the rugged landscape paper I quoted is essentially a good model of NS, not of IS. The reason for that is that the property used for enrichment after each cycle of random mutation was infectivity, which can be considered, for the phage, a good equivalent of reproductive fitness. Therefore, what we are observing and measuring is the ability of RV + NS to reach optimization of an existing function, because the original setting of the experiment, with the introduction of the random peptide as one domain of the g3p protein, reduced infectivity, but did not extinguish it. So, we have two aspects here: a) The initial setting is designed, and the lab procedures are designed, but: b) What is being measured in the evolution of the system is essentially the result of RV and natural selection. With all that, and even with the help of an existing function at the beginning, which should only be optimized (there is nothing like the emergence of a new function here!), still RV and NS cannot achieve real optimization of the function, and fall completely short of the wild type protein. And the authors clearly state that an initial population of huge size (of the order of 10^70) would be necessary to overcome the probabilistic obstacles posed by the rugged landscape to this simple optimization process. Therefore, these are the limits of NS, even when using all your beloved "sideways" meandering. How could Intelligent Design and Intelligent Selection improve this scenario? It's really simple. ID uses understanding and purpose as its tools. Even if the designer uses bottom up procedures, and therefore a random search, he integrates it with intelligent selection. That makes the procedure much more powerful. That was the case, for example, in the famous Szostak paper, where intelligent selection achieved important results in a few passages. The difference is extremely important: a) In the rugged landscape paper, the enrichment is based on infectivity, which is reproductive fitness. So, we are testing NS (or at least a good model of it). b) In the Szostak paper, the enrichment is based on direct measurement of the desired property (the ability to bind ATP), even at very low levels. The important point is that in that way you quickly achieve a molecule which strongly binds ATP (the desired result). However, such a molecule remains completely useless for reproductive fitness (as shown by further studies). Another example of how IS can overcome the difficulties inherent in NS is antibody maturation. Here, a definite intelligent algorithm mutates the existing antibody (in a very targeted and controlled way), and selects for affinity to the epitope (which is available in the system). In that way, high optimization of the existing molecule is achieved in a few months. It is one of the best examples of how an intelligent algorithm, which has a definite purpose and can use information inputted into the system (the epitope), can achieve extraordinary results in a very short time.
Just think how the process could work if the selection of each mutated molecule depended on a reproductive advantage for the individual, with each new configuration simply transmitted to the descendants. That would never work. That is the huge difference between NS and IS. IS uses knowledge and understanding, both in the initial setting of a procedure and in the following stages. It is infinitely more powerful than RV and NS. That is the simple truth. gpuccio
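For readers who would like to see the distinction being argued here in concrete, runnable form, here is a toy Python sketch. It is an illustration only, not a biological model: the target string, alphabet, threshold, and population parameters are all invented for the example. It contrasts a selection scheme that can reward sub-threshold improvements in a measured property (a rough stand-in for the kind of intelligent selection gpuccio describes for the Szostak protocol) with a scheme that cannot distinguish variants until the property crosses a functional threshold (a rough stand-in for selection acting on reproductive fitness alone).

import random

random.seed(0)

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # 20 amino-acid letters
TARGET = "MKTAYIAKQR"               # hypothetical 10-residue "functional" sequence (invented)
THRESHOLD = 8                       # matches required before threshold-only selection "sees" anything

def score(seq):
    """Number of positions matching the target (a crude stand-in for the measured property)."""
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq):
    """Copy the sequence with one random single-letter substitution."""
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(ALPHABET) + seq[i + 1:]

def evolve(graded, pop_size=200, generations=5000):
    """Return the generation at which the target is reached, or None.

    graded=True  -> keep the highest-scoring variants every generation
                    (selection can exploit sub-threshold improvements).
    graded=False -> variants below THRESHOLD are indistinguishable,
                    so selection among them is effectively random drift.
    """
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(pop_size)]
    for gen in range(generations):
        if any(s == TARGET for s in pop):
            return gen
        key = score if graded else (lambda s: score(s) if score(s) >= THRESHOLD else 0)
        pop.sort(key=key, reverse=True)
        parents = pop[: pop_size // 5]
        pop = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return None

print("graded selection reached target at generation:", evolve(graded=True))
print("threshold-only selection reached target at generation:", evolve(graded=False))

On a typical run the graded scheme reaches the ten-letter target within a few hundred generations, while the threshold-only scheme does not reach it at all within the allotted generations. Nothing about a ten-letter toy transfers directly to real protein spaces, of course; the sketch only makes the conceptual contrast concrete.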
Stephen and Optimus: Thank you for the kind words! And my congratulations to Eric and Timaeus. It's great to have such friends in our group :) gpuccio
Barry: I just saw your post #117. Thank you, I am really honored. This is strong motivation to work more :) Thank you again! gpuccio
Eric Anderson @116:
It is therefore supremely unhelpful for anyone who is hoping to advance the debate or bring clarity to the discussion to conflate the two and claim that ID somehow includes or “merges” these second-order questions with the purely objective and scientific inquiry about whether design is detectable. It is extremely unhelpful for public perception, and it is wrong logically.
Questions about the identity or methods of the designers may be second order to the design inference but, in the greater scheme of things, they are the first order questions that need to be asked. The problem with the ID side of the debate is that they have allowed the opposition to dictate how the debate should be conducted. This is weak, in my opinion, and it's probably the main reason that you are not winning. In fact, I would say that, politically, you are losing and may have already lost. But I would not worry too much about all of this. Those who designed life on earth are certainly aware of what's going on and I'm sure events are transpiring just as they should. But then again, this is just one man's opinion. Mapou
Reciprocating Bill:
I would argue that straight line paths up Mount Improbable are very improbable. This makes goal directed design very hard and very unlikely. But it doesn’t impede evolution, because evolution is not goal directed. Evolution wanders sideways more than upward.
I'm trying to make sure I understand this. It sounds like the claim is that evolution is more likely to arrive at a functional island because it wanders aimlessly and has no goal or direction? Eric Anderson
Upright Biped has made a very good point.
Yeah, he has a habit of doing that. :) Chance Ratcliff
Congrats, EA, GP and Timaeus! I greatly look forward to reading your OPs. Optimus
Eric, just click on new post above and type in your post. Pretty intuitive. Barry Arrington
Eric, You're certainly worthy of being an author. Sal scordova
Petrushka: Friday nights are generally rather slow, and I may not be available to post much for the next 24 to 36 hours. So I’m going to summarize where I think the discussion has been. The most convincing argument I see is against any straight path from minimal functionality to optimal functionality. Call it the problem of local maxima. To this I would respond that this is an argument against design. In particular it is an argument against gpuccio’s concept of directed or intelligent selection. I would argue that straight line paths up Mount Improbable are very improbable. This makes goal directed design very hard and very unlikely. But it doesn’t impede evolution, because evolution is not goal directed. Evolution wanders sideways more than upward. It does not “need” to leap out of local maxima, because populations are always alive and functional, or they would be extinct. When something in the environment changes and makes adaptation necessary, the most likely result is extinction. Intelligent selection is not likely to come to the rescue. The Lenski experiment directly addresses this problem and illustrates how evolution can work sideways and sometimes find a path around a barrier. But it is important to note that the populations in Lenski’s experiment were not goal directed and did not need to change. Nor was there any way for an intelligent selector/designer to know which neutral mutation would be the enabler. This is the fatal flaw in Axe’s approach. He does not consider sideways evolution. Reciprocating Bill
StephenB, thank you for the kind comments. Barry, I can't promise to generate many new threads or to spend a lot of time, but I would be honored if it is a simple process to get it set up. I can be reached at my email address on file at UD. Thanks, Eric Anderson
GPuccio, Eric Anderson, and Timaeus should be given posting privileges.
Agreed. They now have them. Barry Arrington
Mapou @109: Intelligent design has been defined by the primary proponents of ID (Dembski, Behe, Meyer, et al.) as the idea that "certain features of the universe and of living things are best explained by an intelligent cause rather than an undirected process." Period. That's it. Yes, that inference includes -- by definition if something is designed -- a reference to the existence of a designer, but it does not get into questions about the designer's intent, identity, purposes, desires, motives, methods or otherwise. These second-order questions may be interesting in their own right. And an affirmative answer to the design question may have implications for some of these second-order questions, but they are logically distinct and separate and must be recognized as such. The fact that a forum like UD hosts various threads and contains comments and tangents, including from those who desire to delve into these second-order questions, has nothing to do with whether or not these issues should be kept carefully separate. I will be the first to acknowledge that the second-order questions are interesting, but they must not be conflated with the fundamental questions that intelligent design asks. A tremendous amount of effort, time, energy, and spilled ink has been spent by the primary proponents of intelligent design to make sure everyone is clear on this point. Unfortunately, as anyone familiar with the debate knows, and as UB has aptly pointed out, one of the primary ploys of anti-ID rhetoric is to conflate the question of design detection with secondary questions about the identity, intent, methods, motives, etc. of this or that putative designer. It is therefore supremely unhelpful for anyone who is hoping to advance the debate or bring clarity to the discussion to conflate the two and claim that ID somehow includes or "merges" these second-order questions with the purely objective and scientific inquiry about whether design is detectable. It is extremely unhelpful for public perception, and it is wrong logically. Eric Anderson
By the way, he would make an excellent UD author
Stephen, UB has posting privileges here. Like someone else I could mention, he prefers to do most of his writing in the comboxes. Barry Arrington
Upright Biped has made a very good point. (By the way, he would make an excellent UD author). In my judgment, ID supporters worry far too much about how unfair readers will interpret and characterize their arguments. They should design their message for fair readers and let the chips fall where they may. One thing they should not do is try to anticipate irrelevant objections about the designer's capacities or, worse, answer those objections in a way that makes them appear legitimate. Another thing they should not do is split the ID argument into two pieces (intelligence arguments vs design arguments) as if Stephen Meyer's historical approach were incompatible with William Dembski's mathematical approach. As UB points out, we need to get back to basics, which is best characterized by ID's definition of itself: "The theory of intelligent design holds that certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection." At the same time, I insist that ID proponents should also be familiar with the philosophical arguments that support their theories. If our adversaries cannot reason properly, a fallout from an anti-intellectual culture, we should attend to these deficiencies as well. Scientific arguments alone cannot address that defect. Still, we should not contaminate the scientific arguments by injecting philosophical arguments into their substance. The two points should be used individually and in concert, not as part of the same argument. StephenB
Upright BiPed @111:
To anyone considering the idea of relaxing ID standards at this critical time, may I suggest you revisit Sun Tzu, or Boyd, or even Ries. It is absolutely not the thing to do.
I think this is a strawman because nobody here is asking for a relaxation of ID standards. There is, however, a dire need to bring additional theories or complementary hypotheses into the fight. Mapou
SB: I agree. KF kairosfocus
To anyone considering the idea of relaxing ID standards at this critical time, may I suggest you revisit Sun Tzu, or Boyd, or even Ries. It is absolutely not the thing to do. Upright BiPed
egads! Just now revisiting my 4:00am post from earlier this week, I see I was awfully snarky. My apologies to Dr Torley for my tone. That was not my intention, and I regret my wording. No one at UD need question my respect for Vincent, or the service he provides to UD on a regular basis - I believe I have been more than just casually vocal about my great appreciation of Dr Torley for years here. He knows this himself. While I regret my wording, I must stand by the underlying conviction. ID has a formidable challenge. A major part of that challenge is a constant will on the part of ID critics to misrepresent what ID actually is. This is the very heart of the ID critics' very successful social, political, and legal defense against ID. It stands in place of their not having a truly science-based response to ID's core claims of specification, irreducible complexity, and the actual fossil record (to which I would like to add semiosis as well). The challenge for ID is therefore not lessened by being less disciplined than our critics about those core claims. There is just simply no room for it. So again, my apologies to Dr Torley, but for myself, I cannot and will not follow in that direction. Upright BiPed
Eric Anderson @100, replying to scordova:
Other questions, what you referred to on the previous thread as “theories of intelligence” (identity, timeframe, how acted, personality, and so on of the designer), are interesting second-order questions, but not part of the design inference.* That these questions can, and indeed must, be kept separate from the design inference itself has been very clearly stated by the major ID proponents from day one. That is why I can’t understand this insistence on claiming otherwise. If your goal is to be provocative in a pro-ID forum, you have certainly accomplished that, but at the expense of clarity and sound reasoning.
I think I understand where Cordova is coming from and I sympathize with his concerns. ID is a misnomer. It should be renamed DI (design inference) instead. And this forum is about more than just the design inference. I see all sorts of POVs being promoted here, some good and others not so good, IMO. There is a lot that can be said about the intelligence aspect of design that would be very useful in the fight against materialism and Darwinism. For example, we know from observing design among humans that designs evolve over time and can be classified hierarchically like a tree. We know that intelligent human designers do not restrict themselves to a strictly nested evolutionary tree of designs. We know that various branches of the tree can be horizontally grafted to distant branches of the same tree, if desired. Indeed, this is what is observed in nature. This kind of thinking is extremely useful and should not be downplayed just because it does not fit the design inference. Mapou
UD administrators: I believe that GPuccio, Eric Anderson, and Timaeus should be given posting privileges. Can there be any doubt that these three individuals grasp ID principles in detail and also from a big-picture perspective? Do they not always communicate in a spirit of friendliness and mutual respect even as they take their adversaries' arguments apart? StephenB
RB, understood. KF kairosfocus
Joe: Please tone down the verbal voltage, I don't think it is helpful, thanks. KF kairosfocus
GP: I thought you had made original posts? KF kairosfocus
petrushka most likely believes that all books arose from one original book via slight changes when the original was being manually copied:
I imagine this story being told to me by Jorge Luis Borges one evening in a Buenos Aires cafe. His voice dry and infinitely ironic, the aging, nearly blind literary master observes that "the Ulysses," mistakenly attributed to the Irishman James Joyce, is in fact derived from "the Quixote." I raise my eyebrows. Borges pauses to sip discreetly at the bitter coffee our waiter has placed in front of him, guiding his hands to the saucer. "The details of the remarkable series of events in question may be found at the University of Leiden," he says. "They were conveyed to me by the Freemason Alejandro Ferri in Montevideo." Borges wipes his thin lips with a linen handkerchief that he has withdrawn from his breast pocket. "As you know," he continues, "the original handwritten text of the Quixote was given to an order of French Cistercians in the autumn of 1576." I hold up my hand to signify to our waiter that no further service is needed. "Curiously enough, for none of the brothers could read Spanish, the Order was charged by the Papal Nuncio, Hoyo dos Monterrey (a man of great refinement and implacable will), with the responsibility for copying the Quixote, the printing press having then gained no currency in the wilderness of what is now known as the department of Auvergne. Unable to speak or read Spanish, a language they not unreasonably detested, the brothers copied the Quixote over and over again, re-creating the text but, of course, compromising it as well, and so inadvertently discovering the true nature of authorship. Thus they created Fernando Lor's Los Hombres d'Estado in 1585 by means of a singular series of copying errors, and then in 1654 Juan Luis Samorza's remarkable epistolary novel Por Favor by the same means, and then in 1685, the errors having accumulated sufficiently to change Spanish into French, Moliere's Le Bourgeois Gentilhomme, their copying continuous and indefatigable, the work handed down from generation to generation as a sacred but secret trust, so that in time the brothers of the monastery, known only to members of the Bourbon house and, rumor has it, the Englishman and psychic Conan Doyle, copied into creation Stendhal's The Red and the Black and Flaubert's Madame Bovary, and then as a result of a particularly significant series of errors, in which French changed into Russian, Tolstoy's The Death of Ivan Ilyich and Anna Karenina. Late in the last decade of the 19th century there suddenly emerged, in English, Oscar Wilde's The Importance of Being Earnest, and then the brothers, their numbers reduced by an infectious disease of mysterious origin, finally copied the Ulysses into creation in 1902, the manuscript lying neglected for almost thirteen years and then mysteriously making its way to Paris in 1915, just months before the British attack on the Somme, a circumstance whose significance remains to be determined." I sit there, amazed at what Borges has recounted. "Is it your understanding, then," I ask, "that every novel in the West was created in this way?" "Of course," replies Borges imperturbably. Then he adds: "Although every novel is derived directly from another novel, there is really only one novel, the Quixote." - David Berlinski "The Denial Darwin"
Joe
Again petrushka demonstrates equivocation and ignorance. Some dialog this is. Please tell us how it was determined gene duplication is a blind watchmaker process (see "Not By Chance" by Dr Spetner). Then please tell us how you determined Axe didn't take gene duplication into account. Onto natural selection which has never been demonstrated to do anything vs goal-oriented searches which have been proven very powerful. Enough said. Then we have the bloviating about language. Joe
Petrushka responds to Gpuccio (responses offered in three consecutive posts are here posted as one): gpuccio:
The opposite is not true. Neutral mutations cannot help in traversing the space between islands of functionality, for two important reasons: a) They can go in any possible direction, because they are random. Even considering the allelic effect of genetic drift, the result remains completely random. Therefore, neutral mutations in no way help in overcoming the probabilistic barriers: the number of possible states to be tested remains the same. b) Negative NS definitely acts against the traversing. When you argue about “lose money and argue that because he lost, it is impossible to win”, you are forgetting the exacting role of negative NS.
The number of states being tested is always limited to those in the immediate vicinity. That is not an astronomical number. Once a duplication occurs — as in the Lensky experiment — negative selection is pretty much irrelevant. Most mutations in the duplicate are going to be neutral. Again, Lensky. Until Axe accounts for the combined possibilities of duplication and drift, his work is not probative. ----- Regarding your “Experimental Rugged Fitness Landscape in Protein? Sequence Space” You are the proponent of intelligent selection. You have argued it is more powerful than natural selection, and I have argued the opposite. You present evidence that goal directed design is hard. Your conceptual problem is assuming there is a goal. You ignore sideways meandering. ----- gpuccio:
6) Regarding languages, I don’t think I understand your point. Languages are structures which are formed and evolve in conscious intelligent beings. Why do you use them as models for non designed evolution?
Because languages evolve. They are not designed. Designed languages like Esperanto do not survive in the wild. I do not know for certain your native language, but you are familiar with English. Despite centuries of effort by designers, no one is in charge of English spelling and grammar. Efforts to rationalize and standardize language fail over time. Usage and spelling change. ID proponents like the analogy with computer code and like to point out that a single bit error in a computer program will crash the system. But language is a much better analogy. Errors occur all the time and do not crash the system. Some of them get incorporated into the language. Over time, languages change to the extent that speakers of a language can’t read or speak the earlier version. Now about Basque. Unless you believe that the Basque language was poofed into existence by a designer, it evolved by incremental change. But it has no apparent continuity with any other extant language. It has no living cousins. It is a demonstration that a system of coding sequences can change incrementally over time, to the point that cousinship cannot be determined. Reciprocating Bill
petrushka on language:
Because languages evolve. They are not designed.
I would love to see any language "evolve" without the help of intelligent agencies. As I said there isn't any debating people like petrushka. Joe
Sal:
Fine, then we just say I theory and D theory, and never use the term ID theory again.
Sal, I didn't mean to be hurtful. I just can't understand where the disconnect is. The only thing that necessarily follows from the design inference is the existence of a designer -- one that can, by the very etymology of the word "intelligence", choose between contingent possibilities. That is all. So we don't throw out the concept of "intelligent" in ID; we just need to understand what it refers to and its limitations. Other questions, what you referred to on the previous thread as "theories of intelligence" (identity, timeframe, how acted, personality, and so on of the designer), are interesting second-order questions, but not part of the design inference.* That these questions can, and indeed must, be kept separate from the design inference itself has been very clearly stated by the major ID proponents from day one. That is why I can't understand this insistence on claiming otherwise. If your goal is to be provocative in a pro-ID forum, you have certainly accomplished that, but at the expense of clarity and sound reasoning. I don't have any problem with someone wanting to investigate the second-order questions. I'm not even particularly exercised about someone claiming, as you have done, that they aren't "science" (though good arguments can be made that some of those second-order questions should reasonably be treated as "science"). What I am concerned with is the conflation of the two, the suggestion that ID "merges" the design inference with second-order questions about the designing intelligence. Such talk muddies the waters, confuses sincere students who are interested in learning about ID, and gives unnecessary ammunition to critics who themselves love to conflate the concepts. It is both inaccurate from a logical and theoretical point of view and harmful from a practical and political point of view. Let's stop suggesting that ID has "merged" the two. It hasn't. The design inference is extremely simple -- and limited. That is the benefit and the usefulness of the concept. Let's keep it that way. If someone wants to investigate second-order questions, fine. But when we talk about ID, let's be clear that if we start delving into second-order questions we are doing so, and that ID neither requires nor depends upon such an inquiry. ----- See in particular my comments #22 and #25 on this thread: https://uncommondescent.com/philosophy/the-d-of-id-is-science-lessons-from-our-dealings-with-nick-matzke/ Eric Anderson
And to prove petrushka is clueless (discussing Lenski and citrate):
The key element of this discussion is the one or two enabling mutations. Unless Axe can bridge this gap, his work is not relevant to supporting or undermining evolution.
1- Axe isn't trying to undermine mere evolution 2- Lenski's citrate digesting bacteria didn't involve any new proteins 3- It didn't involve any new protein folds 4- No one can say if it had anything to do with blind watchmaker evolution. All that happened was an existing gene, encoding a protein that aids in getting the citrate into the cell when O2 is not present, i.e. an anaerobic environment, was duplicated and put under control of a promoter that was active in the presence of O2. petrushka is happily clueless wrt what is being debated. Pathetic. Joe
And this should answer OMagain's question:
First, as observed in Table 1, although we might expect larger proteins to have a higher FSC, that is not always the case. For example, 342-residue SecY has a FSC of 688 Fits, but the smaller 240-residue RecA actually has a larger FSC of 832 Fits. The Fit density (Fits/amino acid) is, therefore, lower in SecY than in RecA. This indicates that RecA is likely more functionally complex than SecY.
From Measuring the functional sequence complexity of proteins (see results and discussion) Specified information has been measured and CSI has been detected wrt biology. So please stop lying and saying that it hasn't been done. Joe
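For readers who want to see the arithmetic behind the comparison quoted above, here is a minimal Python sketch; the Fits values and residue counts are taken from the quoted passage, and the per-residue division is ordinary arithmetic:

# Fit density (Fits per amino acid) for the two proteins cited from the Durston et al. paper.
proteins = {
    "SecY": {"residues": 342, "fits": 688},
    "RecA": {"residues": 240, "fits": 832},
}

for name, p in proteins.items():
    density = p["fits"] / p["residues"]
    print(f"{name}: {p['fits']} Fits over {p['residues']} residues "
          f"-> {density:.2f} Fits per amino acid")
# SecY works out to about 2.0 Fits per amino acid and RecA to about 3.5,
# i.e. the shorter protein carries more measured functional sequence
# complexity per residue, which is the point the quoted passage makes.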
Here are two of those papers again, complete with math: Is Intelligent Design Required for Life? Functional information and the emergence of biocomplexity Joe
Sal:
Fine, then we just say I theory and D theory, and never use the term ID theory again.
No Sal, you are the problem, not ID. The I and D go together for the many reasons already provided. Don't tell me that you are turning into an evo-type and are going to ignore the reasoning presented. The "Intelligent" in Intelligent Design is just describing the type of design. Joe
And as predicted the TSZ ilk still refuse to read the references I provided that demonstrate how to measure biological information. They are a pathetic little people. Heck Mike Elzinga thinks that all we are is condensed matter. And he complains that we don't have a testable methodology. LoL! No one can test the claims of materialism. Joe
I liken petrushka and all evolutionists to losers who whine and cry about their opposition, all the while being unable to do anything to support the claims of their own position. Joe
LoL! Dr Axe knows more about molecular biology than all the TSZ ilk put together. And only incompetent morons would think that calling the genetic code an actual code is a metaphor or analogy. And RB is one of the fools who denies the genetic code is an actual code. Joe
KF:
RB: I will make a few comments...In short, by failing to actually tackle the challenge head on with empirically well founded responses, P created an impression of greater plausibility than is warranted.
Just to clarify, those responses (in 82, above) are wholly Petrushka's. Reciprocating Bill
KF: The link was right, but there was one "http" too many! (just a typo). Here is the corrected link: http://www.tbiomed.com/content/4/1/47 By the way, I do not have OP posting privileges. The only problem here is that it would be interesting to discuss with Petrushka or with other reasonable opponents, but I have neither the time nor the will to go on with discussions at TSZ, or with parallel discussions, as I have already done in the past. I believe that the level of discussion here has become lower, because of the absence of gifted opponents. Just my humble opinion. gpuccio
PPPS: Googled the URL, saw the 2007 pdf here: http://www.tbiomed.com/content/pdf/1742-4682-4-47.pdf Maybe we need to update links? Wayback Machine, final pull as at Sept 2013: http://web.archive.org/web/20130927111508/http://www.tbiomed.com/content/4/1/47 kairosfocus
PPS: The Durston paper link seems dead, which one are you linking? kairosfocus
PS: Oddly, that sounds a lot like the gap between immediate and long term functional advantage that underlies ever so much of the exchanges and debates about design. Foresight allows long distance purposeful action that passes on immediate advantage to gain in the long term. AKA the problem of sub-optimisation. kairosfocus
GP: You too hold OP posting privileges, so you will easily be able to see that VJT, you and I do not have moderator powers in general; to test, go to your admin page and see if you can call up the full list of most popular posts beyond the first tier. (We do have power to release comments in moderation in our own threads.) The decision to restore commenting privileges -- not a right -- lies in other hands and if P was banned for cause or put on mod, there is going to have to be a balance of considerations. And if there is such a cause, maybe the point that posting privilege is valuable for the future will help restrain from heckler type behaviour in the present. Which, would contribute materially to improving the tone of discussion on these matters. KF kairosfocus
RB: I will make a few comments. First, pardon, but most of the above is tangential to the main matter on the table even as can be seen from the agenda of points by P as cited by VJT, and even to the focal issues in Axe and Gauger's work . . . in a lab BTW that seems to have been developed through the despised DI CSC. So, it runs the risk of being a red herring led away to a strawman, already 2/3 of the notorious trifecta fallacy pattern. Whereupon, hostility easily supplies the ad hominem part. As for the hostility issue, I suggest you take a serious look at how the RTH thread looks. Now, I suggest a glance at Axe, Gauger & Luskin, Sci and Human Origins (DI Press, 2012) and onward links, to answer some of your technical questions, from p. 32 on, on some recent protein studies that started with single mutation studies of shifting from one function to another with similar protein structures. In summary, if a single mutation between two similar proteins leads to a valley of inferior function with many local peaks and valleys that may frustrate the move from A to B, multiple muts are likely to face more of the same, cf. Figs 2.2 and 2.3 pp. 37 & 38. We already know from Behe's work that multiple muts for evolutionary steps become ever more implausible, especially in light of fixing under pop genetics conditions that are realistic. Recall, too, the problem of fixing a pair of co-ordinated muts in a pop leading to enormous times to take small steps. I cite Axe, p. 40:
Darwin's engine often moves away from invention in its short-sighted pursuit of immediate fitness gains [--> as in Behe's first rule of evo] . . . . [it] can't climb a peak corresponding to a new invention unless that peak happens to be remarkably close to its current location -- closer than the peak-to-peak distance between any pair of proteins that we know of [--> he makes a remark on provisionality of such empirical knowledge in context] with distinct functions. Even if such an extraordinary case were to be found, it would be just that -- an extraordinary case. Traversing long distances would still depend on a very long and well coordinated succession of extraordinary cases, which amounts to nothing short of a miracle.
In short, the single mut case studied here has much to say to the hoped for multiple co-ordinated mut case. Such is a hoped for mini hopeful monster. Next, language evolution, of course, is an intelligent process, and so is irrelevant to a blind watchmaker process. It does not proceed by small, blind changes in phonemes that perchance happen to make better sense. A closer "analogy" would be trying to move from a hello world program to a complex control system by incremental single ofr several chance changes, hoping for progress at each stage, so that culling out processes will favour the intermediates. One favourable change is hard to find per needles in haystacks, several together that happen to be neatly co-ordinated, or that get commented off for a time then voila without step by step culling for success find themselves in a functional increment and get turned on back together, will be of much lower odds, and the process becomes indistinguishable from a materialist miracle in a quasi infinite multiverse. P's analogy of an investor unwilling to accept temporary losses again becomes irrelevant. As an MBA I can assure you that (as common sense will tell) investment is not a blind watchmaker process. Indeed, one of the principles is that expertise and associated inside knowledge and skill . . . valuable ideas, skills and synergy . . . creates value that is not readily perceptible to outsiders. So, cash burn to create the platform for takeoff is a risky investment, but sharp venture capitalists have made 20% ROI from hanging around watering holes and buying rounds for engineers crying in their cups; by judging the quality of the men and their determination to succeed against all odds -- a process that is intensely dependent on quality of long-term insight and foresight and sufficient familiarity to spot something that will credibly succeed. The notion that such is analogous to blind incremental survival of the fittest and/or luckiest leading to allele frequency shifts accumulating incrementally into body plan transformation in plausible timespans on reasonable mut rates, breeding pop sizes and gen spans, is simply not plausible. But then, all the way back to Darwin's analogies on breeding, that sort of argument by analogy has been used by advocates of evolution. As to the slipping in of that subtly dismissive term, "analogy," that too is to be corrected. Genetic information is coded, functionally specific info instantiated, not a mere analogy to it. Indeed, it is object code in a cluster of machine languages for protein building, ribosome and RNA building, regulation etc. Object code supported by and processed in clever molecular nanotech implementing machines that make Tour's nanocar look simplistic. To view and treat it and its protein derivatives in that light using tools of info theory is not argument by analogy, but analysis of a naturally occurring instance that on the usual timelines is what, maybe 3.5+ billion years old. Yes, codes, algorithms, co-ordinated support and execution machinery have been on the table since the origin of carbon chemistry, aqueous medium, cell based life in a cosmos fine tuned for that in dozens of ways. Why else do you think that ducking the OOL challenge is so significant as a factor in how P et al have gone off-course in their arguments? Do you think I went to the trouble of finding the Smithsonian modern tree of life with OOL as its root just on a whim? 
OOL is the root of the matter, and OOL has to come to grips with the known molecular nanotech of the cell and the sorts of precursors, contingencies and config spaces that are relevant to warm little soupy ponds or volcano vents or comets or gas giant moons etc. The undoing of diffusion multiplied by the minimum threshold to achieve encapsulated metabolism (with smart gating) joined to a code using vNSR already tell us the only credible, vera causa plausible source of such FSCO/I is design. Design sits at the table as a serious candidate best explanation for OOL. And once it is at the table, it is there for OOBPs too up to our own. A point reinforced by the pop genetics challenge of single or multi-step cumulative muts. In short, by failing to actually tackle the challenge head on with empirically well founded responses, P created an impression of greater plausibility than is warranted. Incomplete, more work needed. KF kairosfocus
To Petrushka: Well, I was right in missing you! Very interesting questions indeed. First of all, let's talk of the protein space and Axe's work. I understand your point, and it is a well known point. The protein space is very complex, and very different sequences can generate very similar structure and function, as we see in protein families where distant members of the family can retain the function and structure, while having only partial homology (only 30-40%, or even less). By the way, that is my favorite observation in favor of common descent. So, as I think I understand well your point, I will try to answer it directly with a few brief comments. We can obviously deepen the discussion on each of them. 1) The fact that compensatory multiple mutations can give a different result than single mutations is exactly the reason why the functional protein space is so complex, huge, and difficult to explore. That is, however, no reason not to try. 2) Axe's approach is a top down approach. It is interesting and meaningful, but you are right in saying that it cannot entirely solve the problem of multiple compensatory mutations. That is a limit, but it in no way reduces the importance and value of Axe's results. 3) There are, however, bottom up approaches which give even more significant results. I will quote here the two works which are most important, IMO. 4) The first is, obviously, Durston's paper: http://www.tbiomed.com/content/4/1/47 5) The second is the famous rugged landscape paper: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0000096 I quote here what I wrote to Elizabeth in the past: "Just consider the rugged landscape paper: “Experimental Rugged Fitness Landscape in Protein Sequence Space” “Although each sequence at the foot has the potential for evolution, adaptive walking may cease above a relative fitness of 0.4 due to mutation-selection-drift balance or trapping by local optima. It should be noted that the stationary fitness determined by the mutation-selection-drift balance with a library size of N(d)all is always lower than the fitness at which local optima with a basin size of d reach their peak frequencies (Figure 4). This implies that at a given mutation rate of d, most adaptive walks will stagnate due to the mutation-selection-drift balance but will hardly be trapped by local optima. Although adaptive walking in our experiment must have encountered local optima with basin sizes of 1, 2, and probably 3, the observed stagnations are likely due only to the mutation-selection-drift balance. Therefore, stagnation was overcome by increasing the library size. In molecular evolutionary engineering, larger library size is generally favorable for reaching higher stationary fitness, while the mutation rate, d, may be adjusted to maintain a higher degree of diversity but should not exceed the limit given by N = N(d)all to keep the stationary fitness as high as possible. In practice, the maximum library size that can be prepared is about 10^13 [28,29]. Even with a huge library size, adaptive walking could increase the fitness, ~W, up to only 0.55. The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness.” Emphasis mine.
Don’t you think that “a library size of 10^70” (strangely similar as a number to some of Axe's estimates) for “35 substitutions to reach comparable fitness” (strangely similar to my threshold for biological dFSCI) means something, in a lab experiment based on retrieving an existing function in a set where NS is strongly working?" I would appreciate your thoughts on that. 6) Regarding languages, I don't think I understand your point. Languages are structures which are formed and evolve in conscious intelligent beings. Why do you use them as models for non-designed evolution? 7) Finally, a comment about the role of neutral mutations, which I believe are important to your views. I have always been very clear about that. Neutral mutations do exist, and they allow the "drift" which causes the differences in similar proteins in the course of natural history (the proteins with similar structure and function and different sequence). That is the "big bang theory of protein evolution", which I accept, and which is the strongest point in favor of common descent, as I have already said. That means that neutral mutations can well change the sequence inside the island of functionality, because negative NS allows that. The opposite is not true. Neutral mutations cannot help in traversing the space between islands of functionality, for two important reasons: a) They can go in any possible direction, because they are random. Even considering the allelic effect of genetic drift, the result remains completely random. Therefore, neutral mutations in no way help in overcoming the probabilistic barriers: the number of possible states to be tested remains the same. b) Negative NS definitely acts against the traversing. When you argue about "lose money and argue that because he lost, it is impossible to win", you are forgetting the exacting role of negative NS. I am looking forward to your answers. I would really like you to be able to post here directly (VJ, can you do that, at least for this post?), or that someone goes on posting your answers here for you. gpuccio
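For readers who want an intuitive, runnable picture of the "stagnation on a rugged landscape" result being discussed, here is a minimal Python sketch of a greedy adaptive walk on a randomly generated NK-style landscape. It is not the model used in the cited phage experiment; the sequence length, the interaction parameter K, and the random fitness table are all invented for illustration. The only point it makes is that strictly uphill walks on a rugged landscape routinely stall on local optima below the global peak.

import itertools
import random

random.seed(1)

N, K = 12, 4          # 12 binary sites, each interacting with K neighbours (illustrative values only)
ALPH = (0, 1)

# Random contribution table: each site's contribution depends on itself and its K neighbours.
table = [{bits: random.random() for bits in itertools.product(ALPH, repeat=K + 1)}
         for _ in range(N)]

def fitness(genome):
    """Mean of the per-site contributions looked up from the random table."""
    total = 0.0
    for i in range(N):
        neighbourhood = tuple(genome[(i + j) % N] for j in range(K + 1))
        total += table[i][neighbourhood]
    return total / N

def adaptive_walk(genome):
    """Greedy hill climb: accept only strictly fitness-increasing single-site flips."""
    while True:
        current = fitness(genome)
        best_move, best_fit = None, current
        for i in range(N):
            trial = list(genome)
            trial[i] ^= 1
            f = fitness(tuple(trial))
            if f > best_fit:
                best_move, best_fit = i, f
        if best_move is None:
            return genome, current          # stuck on a local optimum
        genome = genome[:best_move] + (1 - genome[best_move],) + genome[best_move + 1:]

# Exhaustively find the global peak (feasible only because the toy space has 2^12 states).
global_peak = max(fitness(g) for g in itertools.product(ALPH, repeat=N))

walk_peaks = []
for _ in range(200):
    start = tuple(random.choice(ALPH) for _ in range(N))
    _, peak = adaptive_walk(start)
    walk_peaks.append(peak)

reached = sum(abs(p - global_peak) < 1e-12 for p in walk_peaks)
print(f"global peak fitness: {global_peak:.3f}")
print(f"walks reaching the global peak: {reached}/200")
print(f"mean fitness where walks stalled: {sum(walk_peaks)/len(walk_peaks):.3f}")

Whether, and how often, real protein landscapes behave this way is exactly what the papers quoted above are arguing about; the sketch only illustrates the concept of local-optimum trapping that both sides are invoking.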
Dionisio @ 60: well said. KF kairosfocus
Sal, I sure wish you would stop beating this drum. The idea that ID “merges” design theories and theories of intelligence is not helpful. Particularly not when the quote from Dembski that you provided a while back to support your assertion says precisely the opposite — namely, that they can, and should, be kept carefully separate.
Fine, then we just say I theory and D theory, and never use the term ID theory again. :roll: scordova
Petrushka asked that I relay the following responses: I addressed a couple of questions specifically to VJT and gpuccio. I think there’s less animosity in that direction and it might be more productive. 1. Douglas Axe appears to be a competent lab technician. My question is, would his protocol have found the citrate metabolism sequence found by Lenski’s population? I’m interested in whether Axe has experimentally considered multiple enabling mutations. If not, of what probative value is his work? 2. As long as they are arguing from analogy and metaphor, I’d like to ask gpuccio what language is spoken in Italy, and what language was spoken in Italy 2000 years ago, and whether the transition was a saltation. And while on the subject of language evolution, I’d like to ask him whether Basque was created by the designer to have nothing in common with any other known language. 3. Another analogy: I would liken Axe to an investor who is unwilling to accept any temporary losses. What Axe does with protein evolution is argue that an investor cannot make money unless the value of his portfolio increases with every trade and with every change in the market. No sideways trades allowed, and no temporary losses. What Axe does with his experimental work is lose money and argue that because he lost, it is impossible to win. If VJT or gpuccio would respond, I’d appreciate it. Reciprocating Bill
Eric, Until petrushka stops equivocating and puts together a model that actually addresses the entailments of evolutionism, and thus addresses kairosfocus's challenge, he shouldn't get a forum here. petrushka won't even admit that ID is not anti-evolution. keiths at least acknowledged unguided evolution was the position that required defending. He just totally blew it by relying on Theobald's evidences for macroevolution which even Theobald says do not support any mechanism. So let petrushka back to defend his diatribe. It will just be more of the same. He definitely won't post a way to model unguided, ie blind watchmaker, evolution. It will all be an equivocation. And all attempts to correct him will be ignored, just as he has been ignoring them for years. The point is there is nothing new coming from petrushka. And he definitely isn't going to grasp what we say just because he is allowed back. Just sayin'... Joe
EA: I of course cannot speak for UD's mods. However, when I issued the challenge, P had comment privileges here. I doubt that loss of such privileges -- if that is so (is it, UD mods?) -- would have been without fairly serious cause. On correction, the response was not actually posted here but at TSZ; VJT who seems to monitor there picked it up and reposted here. There is a parallel discussion, linked from here. Having been there to see how the challenge was made, I found the general context so nastily personal and willfully misleading that I posted some correctives above. I will not try to comment at TSZ so long as it drifts into the heckler's convention mentality I saw; life is too short for me to waste time and energy dealing with behaviour like that; other than to set the record straight. That said, if you or any other person see responses or remarks there that need a FTR or FYI or point-by-point response on merits or even know where any earlier actual overall response by P or another person is, why not clip-link such here? For that matter, if you have your own thoughts, those too would be welcome. KF kairosfocus
In response to Petrushka's comments:
Evolution is the better model because it can be right or wrong, and its rightness or wrongness can be tested by observation and experiment.
ID can also be right or wrong, so no advantage there. Observation and experiment have never shown any of the larger claims of evolution to be correct. No new body plans, no information-rich systems, no complex functional machines have ever been observed or seen by direct experimentation to come about through alleged evolutionary mechanisms. All of the interesting questions about evolution lie at the end of a long trail of inferences, suppositions, and speculations. The only way Petrushka’s first paragraph even comes close to being true at first blush is due to the rhetorical trick of defining “evolution” so broadly that it encompasses virtually everything. No-one doubts any of the minor observational evidence (finch beaks, bacterial resistance, peppered moths and so on). No-one has ever observed or demonstrated the required major evidence.
For evolution to be true, molecular evolution must be possible. The islands of function must not be separated by gaps greater than what we observe in the various kinds of mutation. This is a testable proposition.
Agreed. This has not been demonstrated, is highly unlikely, and is subject to considerable doubt.
For evolution to be true, the fossil record must reflect sequential change. This is a testable proposition.
Perhaps. But then of course folks like Gould helpfully proposed things like punctuated equilibrium, which essentially hypothesized that we don’t see a lot of sequential change in the fossil record because – wouldn’t you know it – evolution seems to be always taking place just out of reach of our observational ability, an ironic example of proposing a theory based on the lack of evidence. The fossil record is, as Gould and Eldredge and many others have admitted, incongruous, jumpy, characterized by stasis and jumps and gaps. The testable proposition, at least in Darwin’s “slight, successive modifications” version, has been shown false.
For evolution to be true, the earth must be old enough to have allowed time for these sequential changes. This is a testable proposition.
Agreed. And the Earth is not nearly old enough. The universe is not nearly old enough. The entire age of the universe is but a rounding error against any realistic calculation of what would be required for the alleged changes to take place.
Evolution is a better model for a second reason. It seeks regularities. Regularity is the set of physical causes that includes uniform processes, chaos, complexity, stochastic events, and contingency. Regularity can include physical laws, mathematical expressions that predict relationships among phenomena. Regularity can include unpredictable phenomena, such as earthquakes, volcanoes, turbulence, and the single toss of dice.
There are a couple of quite obvious problems with holding up “regularity” as some sort of measure of what constitutes a “better” model. Regularity is certainly important to recognize when it exists. It is also quite important to recognize its limitations. Regularity might help us understand the slow deposition of sand in a delta or the slow carving of a riverbed. But there are lots of irregular physical phenomena that are just as valid in explaining certain features of the physical world – things like floods and meteorite impacts and supernovae. More importantly, we know for a fact of one cause that does not simply follow physical regularity, namely, intelligent designing agents. So asserting that the model that insists on “regularity” is the better model commits (i) the practical mistake of ignoring a large swath of causal events that are known to exist, and (ii) the logical mistake of assuming as a premise the very conclusion that one is trying to reach. ----- Finally, just a quick comment on the parting shots:
Alternatively, discuss whether a variant within a species can be shown to have more or less CSI than another variant. Perhaps a calculation of the CSI in Lenski’s bacteria before and after adaptation.
Why would anyone think that CSI can be calculated as though it is subject to a simple mathematical formula? CSI includes not only complexity, but specificity. The latter is not amenable to simple mathematical calculation. Rather, it deals with function, context, operational aspects, purpose, meaning. Yes, we can calculate the unfortunately-misnamed “Shannon information”; and, yes, that relates to complexity. But the specificity is also required. Anyone who does not understand this point cannot understand CSI, cannot understand design, and cannot mount an effective attack against ID because they will not know what they are talking about. Moreover, in the context of the current discussion, I trust the reader will recognize the rich irony of the evolutionary proponent, on the one hand, acknowledging that molecular evolution is required, that enough time must be available, that a sequential stepladder approach is necessary, but never once offering a detailed analysis of what would be required to get from, say, organism A to B; while on the other hand demanding that the skeptic provide a precise calculation of the difference between organism A and B. The irony of the complete lack of calculation-driven and analysis-driven detail on the evolutionary proponent's part is all the more rich, given the near universal acknowledgement by even staunch evolutionists that organisms appear designed. Truly the onus is on the evolutionist to provide some reasonable evidence contra this nearly self-evident observation, rather than just vague references to "change over time" and the like, coupled with demands that skeptics prove a negative. Eric Anderson
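To make the "calculable half versus non-calculable half" point concrete: the complexity side is the kind of thing a few lines of code can compute, given an assumed probability model. The Python sketch below uses a made-up 50-residue sequence and a uniform 20-letter model, both purely illustrative; nothing in the arithmetic measures function, context, or purpose, which is precisely the specificity gap being described.

import math

# Illustrative only: a made-up 50-residue sequence and a uniform 20-letter model.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSG"
alphabet_size = 20

bits_per_residue = math.log2(alphabet_size)   # about 4.32 bits if all residues are equiprobable
total_bits = len(sequence) * bits_per_residue

print(f"length: {len(sequence)} residues")
print(f"surprisal under a uniform model: {total_bits:.1f} bits")
# This number says nothing about whether the sequence folds, binds anything,
# or does anything at all -- it is complexity without specificity.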
Petrushka should certainly be allowed to defend the response here on UD. The challenge was issued here, the response from Petrushka was posted here. The ability to defend the response should be allowed. Eric Anderson
147,490 kairosfocus
Sal @3:
ID is composed of two theories, Design theories and Intelligence theories.
Sal, I sure wish you would stop beating this drum. The idea that ID "merges" design theories and theories of intelligence is not helpful. Particularly not when the quote from Dembski that you provided a while back to support your assertion says precisely the opposite -- namely, that they can, and should, be kept carefully separate. Eric Anderson
F/N: As of a moment ago, neither my inbox nor spam box has anything from Petrushka, unless P is using a Nigerian scam type greeting. KF kairosfocus
FYI- petrushka is whining because he cannot defend his equivocating, lie-filled and meaningless drivel here. petrushka, you are welcome on my blog. If you can get your crap past me I will plead your case to the UD moderators. Joe
145,865 kairosfocus
143,158 for VJT on Tour . . . kairosfocus
It needs to be pointed out that Behe, one of the top three design school scientists, holds to universal common descent. kairosfocus
ID is anti a priori materialist, blind watchmaker molecules to man evolutionary narratives presented to the public as if they were demonstrated unassailable fact. For cause. kairosfocus
Living organisms are islands of function and blind watchmaker processes cannot reach them. That said, petrushka is nothing but a grand equivocator who cannot grasp the fact that ID is not anti-evolution. Joe
P, you asked for islands of function. You have them: a key defining characteristic of the individual species or the like. KF kairosfocus
PS: . . . or should that be, needle. Also, the incidence of isolated protein forms in the space even between close species, is relevant. Kozulic on singletons:
Proteins and Genes, Singletons and Species Branko Kozulić Gentius Ltd, Petra Kasandrića 6, 23000 Zadar, Croatia Abstract Recent experimental data from proteomics and genomics are interpreted here in ways that challenge the predominant viewpoint in biology according to which the four evolutionary processes, including mutation, recombination, natural selection and genetic drift, are sufficient to explain the origination of species. The predominant viewpoint appears incompatible with the finding that the sequenced genome of each species contains hundreds, or even thousands, of unique genes - the genes that are not shared with any other species. These unique genes and proteins, singletons, define the very character of every species. Moreover, the distribution of protein families from the sequenced genomes indicates that the complexity of genomes grows in a manner different from that of self-organizing networks: the dominance of singletons leads to the conclusion that in living organisms a most unlikely phenomenon can be the most common one. In order to provide proper rationale for these conclusions related to the singletons, the paper first treats the frequency of functional proteins among random sequences, followed by a discussion on the protein structure space, and it ends by questioning the idea that protein domains represent conserved units of evolution.
A bit more:
One strategy for defusing the problem associated with the finding of functional proteins by random search through the enormous protein sequence space has been to arbitrarily reduce the size of that space. Because the space size is related to protein length (L) as 20^L, where 20 denotes the number of different amino acids of which proteins are made, the number of unique protein sequences will rapidly decrease if one assumes that the number of different amino acids can be less than 20. The same is true if one takes small L values. Dryden et al. used this strategy to illustrate the feasibility of searching through the whole protein sequence space on Earth, estimating that the maximal number of different proteins that could have been formed on planet Earth in geological time was 4 x 10^43 [9]. In laboratory, researchers have designed functional proteins with fewer than 20 amino acids [10, 11], but in nature all living organisms studied thus far, from bacteria to man, use all 20 amino acids to build their proteins. Therefore, the conclusions based on the calculations that rely on fewer than 20 amino acids are irrelevant in biology. Concerning protein length, the reported median lengths of bacterial and eukaryotic proteins are 267 and 361 amino acids, respectively [12]. Furthermore, about 30% of proteins in eukaryotes have more than 500 amino acids, while about 7% of them have more than 1,000 amino acids [13]. The largest known protein, titin, is built of more than 30,000 amino acids [14]. Only such experimentally found values for L are meaningful for calculating the real size of the protein sequence space, which thus corresponds to a median figure of 10^347 (20^267) for bacterial, and 10^470 (20^361) for eukaryotic proteins . . . . one should bear in mind that in a 300 amino acid protein there are 5,700 (19 x 300) ways for exchanging one amino acid for another, and that each one of these 5,700 possibilities points to a unique direction in the fitness landscape [41]. A single amino acid substitution can trigger a switch from one protein fold to another, but prior to that one, multiple substitutions in the original sequence might be necessary . . . . as a matter of principle, how can one possibly talk about a separate or additional fitness effect due to a 3D structural change if the protein sequence determines its structure, and the structure determines function and the function determines fitness? My literature search for publications describing evolutionary modeling based on fitness effects of protein structures gave no results. And according to a paper published in 2008: “the precise determinants of the evolutionary fitness of protein structures remain unknown” [47] – 18 years since Lau and Dill proposed the “structure hypothesis” [15]. On the other hand, in a number of papers it was shown that all relationships in the protein structure space can be described in purely mathematical terms [18, 25-28], and a most recent study concludes that “these results do not depend on evolution, rather just on the physics of protein structures” [29]. If all relationships in the protein structure space can be described fully without the need to invoke evolutionary explanations, then such explanations should not be invoked at all (Ockham’s razor).
That's the real scope of challenge. And it bites:
When proteins of similar sequences are grouped into families, their distribution follows a power-law [65-72], prompting some authors to suggest that the protein sequence space can be viewed as a network similar to the World Wide Web, electrical power grid or collaboration network of movie actors, due to the similarity of respective distribution graphs. There are thus small numbers of families with thousands of member proteins having similar sequences, while, at the other extreme, there are thousands of families with just a few members. The most numerous are “families” with only one member; these lone proteins are usually called singletons. This regularity was evident already from the analysis of 20 genomes in 2001 [66], and 83 genomes in 2003 [69]. As more sequences were added to the databases more novel families were discovered, so that according to one estimate about 180,000 families were needed for complete coverage of the sequences in the Pfam database from 2008 [71]. Another study, published in the same year, identified 190,000 protein families with more than 5 members - and additionally about 600,000 singletons - in a set of 1.9 million distinct protein sequences [73] . . . . The frequency of functional proteins among random sequences is at most one in 10^20 (see above). The proteins of unrelated sequences are as different as the proteins of random sequences [22, 81, 82] - and singletons per definition are exactly such unrelated proteins. Thus, to enter the distribution graph as a newcomer (Fig. 2d), each new protein (singleton) must overcome the entry barrier of one against at least 10^20. After the entry, singleton’s chance of becoming prominent, that is to grow into one of the largest protein families, is about one in 10^5 (Fig. 2d). Thus, it is much more difficult for a protein to become biologically functional than to become, in many variations, widespread: the entry barrier is at least fifteen orders of magnitude higher than the prominence barrier. This huge difference between the entry and prominence barriers is what makes the protein family distribution graph unique. In spite of this high entry barrier, in the sequenced genomes the protein newcomers (singletons) always represent the largest, most common, group: if it were otherwise, the distribution graph would break down. The mathematical models that incorporate data from all sequenced genomes in effect “spy” on nature [21]. With the help of one such model we have just uncovered something remarkable: in living organisms the most unlikely phenomenon can be the most common one. This feature clearly distinguishes the complexity of living organisms from the complexity of self-organizing networks . . . . Koonin and coworkers have developed several versions of their gene birth-death-and-innovation model (BDIM). The power-law distribution, however, could be reproduced only asymptotically, the family evolution time required billions of years when empirical gene duplication rates were brought in, the genes within a family needed to interact, and prodigious gene innovation rate was necessary for maintaining a high influx of singletons [83-87]. Horizontal gene transfer (HGT), rapid sequence divergence and ab initio gene creation were mentioned as the possible sources of singletons. In another attempt, Hughes and Liberles proposed that just gene duplication and different pseudogenisation rates between gene families were sufficient for emergence of the power-law distribution [88].
The authors ruled out horizontal gene transfer and ab initio gene creation as the processes that could form new genes, because these processes were rare in eukaryotes but the power-law distribution was observed also with eukaryotic families. The evident problem with this study, however, is in that pseudogenisation per definition leads to a loss of function: the resulting power-law distribution of non-functional protein families is entirely different from the power-law distribution of functional protein families [read that blind search in AA space] . . . . For the origin of unique genes one has to turn to divergence of the existing sequences beyond recognition, or to ab initio creation, where the ab initio creation can happen either from non-coding DNA sequences present already in the genome or by introduction of novel DNA sequences into the genome. Regardless of which one of these three scenarios, or their combination, we consider, necessarily we come into the wasteland of random sequences or we must start from that wasteland: facing the probability barrier of one against at least 10^20 cannot be avoided. The formation of each singleton requires surmounting this probability barrier. Without the incorporation of this probability, or perhaps another one that might be better supported by future experimental data, all models aiming to explain the observed protein family distribution will remain unrealistic.
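The arithmetic behind the quoted figures is easy to check. Here is a minimal Python sketch, assuming only the median protein lengths (267 and 361 amino acids) and the 1-in-10^20 entry barrier quoted above; it reproduces the 10^347 and 10^470 sequence-space sizes and restates the entry barrier in bits:

```python
# Minimal sketch of the sequence-space arithmetic quoted above.
# Inputs are only the figures Kozulic cites: a 20-letter amino acid alphabet,
# median lengths of 267 (bacterial) and 361 (eukaryotic) residues, and an
# entry barrier of roughly 1 in 10^20.

import math

ALPHABET = 20  # amino acids used by all organisms studied so far

def sequence_space_log10(length):
    """log10 of the number of distinct sequences of length L over 20 letters (20^L)."""
    return length * math.log10(ALPHABET)

for label, length in [("bacterial median (267 AA)", 267), ("eukaryotic median (361 AA)", 361)]:
    print(f"{label}: 20^{length} ~ 10^{sequence_space_log10(length):.0f}")

# The 1-in-10^20 entry barrier quoted above, expressed in bits:
print(f"entry barrier: -log2(1e-20) ~ {-math.log2(1e-20):.1f} bits")
```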
This leads to a pivotal challenge:
Siew and Fischer succinctly described the issues at stake: “If proteins in different organisms have descended from common ancestral proteins by duplication and adaptive variation, why is that so many today show no similarity to each other?” And further: “Do these rapidly evolving ORFans correspond to nonessential proteins or to species determinants?” [103] . . . In 2008, Yeats et al. [73] found around 600,000 singletons in 527 species - 50 eukaryotes, 437 eubacteria and 39 archaea - corresponding to 1,139 singletons per species. No information about the number of singletons is available in the most recent summary of the data from over 1100 sequenced genomes encompassing nearly 10 million sequences [64]. In spite of the missing recent data on singletons, the results of the above calculations are sufficient for an unambiguous conclusion: each species possesses hundreds, or even thousands, of unique genes - the genes that are not shared with any other species. . . . . The presence of a large number of unique genes in each species represents a new biological reality. Moreover, the singletons as a group appear to be the most distinctive constituent of all individuals of one species, because that group of singletons is lacking in all individuals of all other species. The conclusion that the singletons are the determinants of biological phenomenon of species then follows logically. In System of Logic, John Stuart Mill outlined his Second Canon or Method of Difference [133]: “If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance in common save one, that one occurring only in the former; the circumstance in which alone the two instances differ, is the effect, or the cause, or an indispensible part of the cause, of the phenomenon.”
Hamming-distance isolated islands of function with a vengeance indeed. kairosfocus
kairosfocus, Axe is working with real proteins with over 80AAs. That doesn't count with evos. ;) Joe
F/N: Axe's empirical work indicates protein rarity in AA space (for which he suffered a little dose of being expelled . . . ) is of the order of 1 in 10^60 to 10^70+. That puts us in the ballpark of the one straw to 1,000 Ly cubical haystack that was outlined above. KF kairosfocus
Richie didn't read the PDF by Kalinsky. Richie didn't read the Durston paper. Richie didn't read the Hazen paper. And Richie didn't read the Szostak paper I also referenced. As I said, the moron just wants to make this personal. And he personally fails every time we ask for evidence for blind watchmaker evolution. And not only that: the only reason to attack ID and CSI is because blind watchmaker evolution is a total failure. IOW Richie et al are admitting theirs is a failed position. Case closed. Joe
As predicted, Richie Hughes choked on the references. And to prove Alan Fox is totally clueless, he followed Zachriel's ignorant lead by thinking Keefe and Szostak refute Durston on the rarity of proteins. If Keefe and Szostak had used median-sized proteins they wouldn't have had any success. They used 80 AA proteins, i.e. very, very small proteins. And they are not indicative of the proteins in living organisms. Joe
PPS: The log reduced Chi metric is actually equivalent to the per aspect explanatory filter that RTH and co also despise. But, that reaction is unable to overturn the basic fact that, properly used, it works reliably. Default, mechanical necessity explains phenomena. High contingency of outcomes on similar starting conditions overturns that. Two empirically grounded alternatives: chance leading to statistical scatter, or intelligence acting by design. Default 2: chance. Overturned by FSCO/I as a tested sign of design. Testability, show FSCO/I coming about by blind chance and/or mechanical necessity. Tests, billions of successful cases, many failed attempts to show otherwise. Needle in haystack analysis (similar to that behind the 2nd law of thermodynamics pivoting on relative statistical weights of macroscopically distinct clusters of microstates) backs up the empirical findings. That is, the result is as we should expect with high reliability. kairosfocus
Joe: Info is quite a serious matter. I think RTH and co need to first clarify what info is and why it is measured as it is for t/comms purposes. Namely on info-carrying capacity typically in bits. Then, they need to ponder the difference between that and things like how we report computer file sizes in bits, but these bits at work are as a rule functionally specific. (E.g., I have done the exercise of looking at doc format Word files from the bit and ASCII code end. Looks like there is a lot of repetitive useless stuff. Clip just one of those at whim. Close off then try to re-open the file. Crash, corrupt file. Functional specificity. Such can also try the exercise of sending noisy info to an analogue monitor and watching the picture dissolve into snow. Text files can be corrupted to varying degree and it is easy to see how recoverability/function deteriorates into gibberish as things get worse. This is just one way to see how FSCO/I is real, and it gives context to Orgel and Wicken back to the '70's, who highlighted how functionally specific complex organisation and associated information were pivotal to understanding life based on cells by contrast with crystallographic order or the sort of random patterns of micro crystals in a bit of granite. But, as my mom so often said, a man convinced against his will is of the same opinion still. [Way back, they used to teach gems of wisdom in school for kids to memorise as sayings; this is an especially apt one in a situation where just a glance at the thread will show abundant evidence of rage- and hostility- driven blindness. For instance, it should be obvious that if you don't actually submit work you have no just cause for complaint if it is not received.]) It is in that context that they can begin to understand null state vs ground state vs functional state strings and how different degrees of constraint shift the avg info per symbol (Shannon's H) for AA sequences. Flat random across 20 possibilities gives 4.32 bits per symbol carrying capacity, but actual protein statistics . . . similar to flat random ASCII vs patterns of frequencies of English text . . . will shift proportions, so a stochastic pattern would give a null. Then, looking at families of proteins in living forms gives an empirical measure of functional info capacity, site by site. That is some aligned sites can vary considerably, others much less so. One aspect is responsiveness to water and effects on folding, a first step to function. The math follows, and gives a useful empirically grounded functionally specific info metric rooted in actual sequencing. Take this and blend in the relevant config spaces noting how proteins of relevant character typically are 250 - 300 to 1,000+ AAs long. The results quickly put one beyond the sort of threshold already outlined above on the toy example of every atom in our solar system searching through a 500 fair coins config space at the rate of a fresh observation every 10^-14 s. For 10^17 s. This leads to the 1 straw to a cubical haystack 1,000 light years on the side sample to pop ratio, as in searching for a needle in a haystack with strictly limited resources. So, one only has a right to expect to see the bulk, non-functional gibberish. 
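To make the site-by-site idea concrete, here is a toy Python sketch. The four-sequence "alignment" is invented purely for illustration; the ground state is the flat-random log2(20) = 4.32 bits per site mentioned above, and the per-site constraint (ground state minus observed H) is summed as a rough functional-information estimate in the spirit of the Durston-style measure discussed in this thread:

```python
# Toy sketch of the site-by-site calculation described above. The "alignment"
# is invented for illustration; real measurements use large aligned protein
# families, not four short strings.

import math
from collections import Counter

GROUND_STATE_BITS = math.log2(20)  # 4.32 bits/site for flat-random over 20 amino acids

alignment = [
    "MKTAYIA",
    "MKTAYLA",
    "MRTAYIA",
    "MKSAYIA",
]

def column_entropy(column):
    """Shannon H (bits) of one aligned column, from observed amino acid frequencies."""
    counts = Counter(column)
    total = len(column)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

functional_bits = 0.0
for i, column in enumerate(zip(*alignment), start=1):
    h = column_entropy(column)
    functional_bits += GROUND_STATE_BITS - h   # constraint (reduced uncertainty) at this site
    print(f"site {i}: H = {h:.2f} bits, constraint = {GROUND_STATE_BITS - h:.2f} bits")

print(f"rough functional information estimate: {functional_bits:.1f} bits over {len(alignment[0])} sites")
```

The more a site is constrained (less variation tolerated across the family), the closer its contribution gets to the full 4.32 bits; freely varying sites contribute little or nothing.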
And BTW, if such objectors had paid attention over the years, they would have noticed also, that by analogy of AutoCAD etc, we can see that organisation expressible in a nodes and arcs and components pattern (common in engineering, think the exploded view so commonly used in assembly) is reducible to a structured set of strings. Where the structure itself is effectively a code, an expression of language. Discussion on strings is WLOG. So, enough Math for purpose has long since been there; for instance cf. the summary here on. Yes, once one has understood basics of info th it is not rocket science, but no one said it was. Indeed, Dembski's 2005 metric can be reduced logarithmically and seen to be an info beyond a threshold metric: I - 500. (A point that seemed to have escaped May/Mathgrrl* some years back.) Put in some reasonable limits and the threshold is 500 bits. Blend in a dummy variable that sets to 1 when there is objective evidence of functional specificity. Voila: Chi_500 = Ip*S - 500, bits beyond the solar system threshold. (For observed cosmos as a whole, go for 1,000 bits) Yes, this requires some scientific empirical investigation to see why 500 is a good threshold for the sol system, to identify info content metrics and to evaluate whether S can be set or holds its 0 default. That is, this is a science starter, not a science stopper, and it is not a creedal declaration, but an invitation to testability. Effectively, the import of this is that, reliably, FSCO/I beyond 500 bits will set Chi_500 > 0, and that indicates design as most credible causal explanation. So, try to test this, and see if it is so. Billions of positive cases, many dozens of attempts to get counter-examples over the course of years [stuff like canals on mars in drawings of astronomers from 100 years ago, or an imaginary clock world that "evolves" more and more sophisticated clocks, etc etc etc] uniformly failing, often by letting in intelligently driven active information or target-guiding oracles in the back door. Inductively massively tested and reliable. Why then the controversy, rage and increasingly hostile personalities in response? because of implications: life forms starting from protein complexes and associated DNA, are on this criterion to be seen as designed. With ever so many examples in the living cell that pass the threshold. So, we are cutting across ideological commitments and doing so in a context that is scientifically rooted in light of a major field of science over the past 70 years: Information Theory. The rage and fury of the blast will pass, the hecklers will find themselves discredited, the hate sites will increasingly isolate themselves into irrelevance as fever swamps to be quarantined. The slanders will increasingly be obviously false accusations. The irrational behaviour driven by ideologically driven rage rooted in seeing evolutionary materialist scientism under threat will expose itself for what it is. The silly notion that one has to adopt evolutionary materialism in order to promote scientific, technological and economic progress will fall of its own weight once stated in bald terms and compared with history and current situations. And, a token of the problem is plain from the above: what could have led P to imagine that such a list of talking points would constitute an adequate answer to the challenge to actually reasonably substantiate the evolutionary materialist, blind watchmaker claim? 
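The reduced metric itself is a one-liner. A minimal Python sketch follows, with illustrative input values only (the 700-bit figure below is not a measured quantity):

```python
# Minimal sketch of the log-reduced metric described above: Chi = Ip*S - threshold,
# where Ip is an information-content figure in bits and S is a dummy variable set
# to 1 only when there is objective evidence of functional specificity.

SOLAR_SYSTEM_THRESHOLD = 500      # bits, per the comment above
OBSERVED_COSMOS_THRESHOLD = 1000  # bits, for the observed cosmos as a whole

def chi_metric(info_bits, functionally_specific, threshold=SOLAR_SYSTEM_THRESHOLD):
    """Bits beyond the threshold; a positive value is read as a design indicator."""
    s = 1 if functionally_specific else 0
    return info_bits * s - threshold

# Illustrative values only -- 700 bits is not a measured quantity:
print(chi_metric(700, functionally_specific=True))    # 200  -> beyond the threshold
print(chi_metric(700, functionally_specific=False))   # -500 -> default explanation stands
print(chi_metric(700, True, threshold=OBSERVED_COSMOS_THRESHOLD))  # -300
```

Any figure below the threshold, or any case where functional specificity cannot be objectively shown (S = 0), defaults to chance and/or necessity; only bits beyond the threshold with S = 1 yield a positive Chi_500.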
And, if the case were such a slam dunk as to be as sure as gravity or the roundness of the earth, in actuality, exercises similar to what I called for should be a dime a dozen, and should have an irrefutable, solid character. Not hurling elephants, ignoring major aspects of the issue [OOL, Tree of Life branching and the need to address Gould's stasis and suddenness issues as well as the problem of homology vs the diverse gross anatomy and molecular trees etc], making vague phil of sci assertions on testability [both sides of a scientific issue will generally be testable], and so forth. What seems to be plain is that this is a case where imposition of Lewontin-Sagan a priori materialism makes a case seem much stronger than it is, but at the price of begging big questions. Hence Phil Johnson's point:
For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
P and those who cheered P on, will need to do some sober reflections. But, I am not sure they are even paying duly careful attention. KF *PS: Not the Calculus professor. kairosfocus
Barb @ 59 I see your point, but I think here the talk is about a model being testable and falsifiable, i.e. can be proven either right or wrong (true or false). Some models might not allow that kind of test where the result could point either way. For example, at least at this point, the multiverse theories don't appear to be testable and much less falsifiable. The ID model proposes that CSI is only the product of intelligence. If someone can test such proposition and find a case where CSI is produced by unintelligent means, then the ID proposition could be considered false. Dionisio
The reply to KF lost me at the very beginning: "Evolution is the better model because it can be right or wrong." Wait, what? If it's wrong, why would you--or any intelligent person--believe it? What would be the point of believing in something that is false? Linus Pauling stated that science is a search for the truth. Truth is that which conforms to reality. If you state that your theory is either right or wrong, then it's not the truth. Barb
Poor little Richie is kicking and screaming for math that he can't understand. Start here and read the pdf. Then see Kirk K. Durston, David K. Y. Chiu, David L. Abel, Jack T. Trevors, "Measuring the functional sequence complexity of proteins," Theoretical Biology and Medical Modelling, Vol. 4:47 (2007):
[N]either RSC [Random Sequence Complexity] nor OSC [Ordered Sequence Complexity], or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life. FSC [Functional Sequence Complexity] includes the dimension of functionality. Szostak argued that neither Shannon’s original measure of uncertainty nor the measure of algorithmic complexity are sufficient. Shannon's classical information theory does not consider the meaning, or function, of a message. Algorithmic complexity fails to account for the observation that “different molecular structures may be functionally equivalent.” For this reason, Szostak suggested that a new measure of information—functional information—is required.
Here is a formal way of measuring functional information: Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak, "Functional information and the emergence of biocomplexity," Proceedings of the National Academy of Sciences, USA, Vol. 104:8574–8581 (May 15, 2007). See also: Jack W. Szostak, "Molecular messages," Nature, Vol. 423:689 (June 12, 2003). What Richie and friends don't seem to realize is that biological information came from Crick with his Central Dogma. It relates to sequence specificity. And if unguided evolution could account for it, Richie and pals wouldn't need to worry about CSI or any math. The only reason to even try to argue against CSI is because he knows materialism cannot explain it. He doesn't understand that attacking ID will never be positive evidence for materialism and evolutionism. But anyway it's a sure bet that Richie won't be happy with my references because his agenda is to make this personal - it definitely ain't about evidence because he has no idea how to assess any evidence. Joe
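For reference, the Hazen et al. paper cited above defines functional information as I(Ex) = -log2 F(Ex), where F(Ex) is the fraction of all possible sequences whose activity meets or exceeds a threshold Ex. A minimal Python sketch of that definition follows; the 1-in-10^11 figure is the commonly quoted summary of Keefe and Szostak's ATP-binding result and is used here only for illustration:

```python
# Minimal sketch of the Hazen et al. functional information measure:
# I(Ex) = -log2 F(Ex), where F(Ex) is the fraction of sequences whose
# activity meets or exceeds the level Ex.

import math

def functional_information(fraction_meeting_threshold):
    """I(Ex) in bits, given the fraction of sequences with activity >= Ex."""
    if not (0.0 < fraction_meeting_threshold <= 1.0):
        raise ValueError("fraction must be in (0, 1]")
    return -math.log2(fraction_meeting_threshold)

# Keefe & Szostak's ATP-binding experiment is commonly summarised as roughly
# 1 functional sequence in 10^11 among random ~80 AA proteins (illustration only):
print(f"{functional_information(1e-11):.1f} bits")   # ~36.5 bits
```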
Hi Vincent, thanks for your response. As I said in my response to Richie, those questions prove that ID is not a dead-end. Dembski said so also. Intelligent Design is the study of patterns in nature that are best explained as the result of intelligence. — William A. Dembski Why do we want to not only detect but also study it? To answer those questions. For example, with ID as the accepted paradigm, we would have institutions and universities working on these questions. And universities would be pumping out students to help find the answers. For me the more interesting question is where is the software. The DNA is akin to the 1s & 0s, i.e. the electric pulses that represent the software, but it ain't the software. There is immaterial information in all living organisms that makes it all go. Yes, having a plan is nice but we cannot get ahead of ourselves. Nor can we forget the evidence for design from cosmology, e.g. "The Privileged Planet". BTW the age of the earth depends on how it was formed. The 4.5 billion year mark requires the assumption that the proto-earth was entirely molten, i.e. 20,000 Kelvin; no crystals from the accretion material are allowed to have survived. Joe
F/N 3: A sad point:
TSZ: petrushka on March 11, 2014 at 7:05 pm said: This is not complicated. KF promised to start a thread in the name of anyone who responds to his challenge. This is the second time I’ve offered an essay dealing with his questions and criteria. Let him keep his word.
P, of course, notified me of neither such attempt. Which is what I indicated 1 1/2 years ago -- kindly notify me, as Joe did recently and I guest-posted his post. Those who are desirous and serious know well enough how to find my contact through the always linked note from the handle that appears for every comment I have ever made at UD. As is obvious from the above, I found out about the recent post at TSZ -- which I do not regularly visit -- by VJT's cross-post here. I have noticed no notification, and so far all I have is a say-so on an earlier claimed answer. P, in simple terms, you know or should know how to find my email. I have not found in my email box an attempt or notification of same. In this second case VJT has posted, which renders my own posting a moot issue. Unfortunately, the posted answer does not adequately address the matter, as shown. This is similar to my experience six months ago when I put together a composite from a remark of Dr Liddle (to the effect that there was nothing on OOL) and one by Jerad [IIRC] on macro evo, which was disappointing. And earlier I had taken up the Wiki articles and The Talk Origins 29 evidences, by way of saying an answer needs to be better than that. In short, it sure looks like I have (again) been misrepresented in a way that would cast unjustified doubt on me. I think you need to set the record straight, P. (And I would post your try no 1 if you notify me on it.) KF kairosfocus
Sal -
...the hypothesis has its challenges in terms of believability because of the absence of seeing the Designer.
I have to say, I don't quite agree there, Sal. I think this is the fault of the observer, rather than the theory. They are those chained in the cave. We need not know anything about the ancient Egyptians to rightfully conclude the Sphinx and pyramids were designed. We don't know exactly the designing mind behind the Antikythera mechanism; however, we know it to be an object of design. Yes, I agree there is a widespread issue of individuals refusing to look at the theory because the designer cannot be presented, but is that attributable to the theory itself, or the a priori worldview of the observers? These same people quite often accept ideas and theories that cannot be presented, so long as it isn't the idea of God. TSErik
F/N 2: The strawman arguments continue. Objectors to design inferences on the world of life full well should know that ever since Thaxton et al in the early 1980's, it has been recognised that an inference to design as process on the world of life does not entail an inference to any particular designer, whether within or beyond the observed cosmos. Indeed, I have myself repeatedly pointed to the work of Venter et al and raised the point that a molecular nanotech lab some generations beyond our state of the art could be a sufficient cause for what we see in the world of life on our planet. Which so far is the only place we actually observe cell based life. Those who persistently distort and caricature the design inference are therefore willfully continuing a misrepresentation. Going beyond, there is a whole other field of design inferences, pioneered by the likes of Sir Fred Hoyle, that redneck Bible-thumping fundy ignoramus [Nobel-equivalent prize holding astrophysicist and lifelong agnostic], who pointed to the fine tuning evidence in its early features, a pattern of evidence that has now grown by leaps and bounds. That evidence as I summarised in brief earlier in this thread, points to a cosmos designed to host C-chemistry, aqueous medium cell based life from its basic physics on up. Couple that to the logical implications of a credibly contingent cosmos, and we see ourselves looking -- even through a multiverse speculation -- at a necessary being cause with a designing mind as the explanation to beat. In that context, it would be no surprise to find life that fits the implications of the cosmos' design. And, it would be independent of whether or not there is universal common descent. In that context, the evident design of life is not even a critical issue for design thinkers. It is just that that is where the evidence points. KF kairosfocus
F/N: I see P, end of the posted answer:
I count 323 words. I would be happy to post it in response to Kariosfocus’ challenge, but unfortunately I am not allowed to post.
P full well knows that I gave my word that I would host an answer, under my account. As was indicated from the outset. So, a serious answer would have been posted. As it is, VJT noted the comment, clipped and posted. I happened to notice his title line, and took time to respond. The thread above suffices to show that the attempt is weak, as should be obvious save to those looking with the darwinist eye of faith. As to RTH's attempt to change the subject to discussing the designer, the gap in that should be fairly obvious from what we are trying to do in origins science. Namely, to seek to understand the past which we did not see and for which there is no generally accepted record. To do that, one has to play the detective examining circumstantial evidence tracing from that past. And to explain cause adequately, the vera causa test is needed: we must show factors adequate to the effects. In this case, it is obvious that designed objects OFTEN -- as opposed to always -- exhibit features that mark them as distinct from mechanical necessity and chance. One of these is FSCO/I. And with objects showing such, we are entitled to infer design on best explanation. Process. Just as we can infer to design in a suspicious fire without knowing whodunit. But of course if one is explicitly or implicitly committed ideologically to no designer being possible, one will easily reject a design inference. Not because of the inductive case but because of an a priori. And it seems ever more clear that such controlling ideological a prioris are at work. Time for fresh thought. KF kairosfocus
VJT: Thanks. I took a look at title and OP, and setting ad hominems aside, at best it pivots on the sort of ideological misunderstandings like those Marxists used to have. Back to basics in a nutshell. On the inductive side there are billions of cases of FSCO/I around us beyond 500 bits, and as libraries full of books, Internets full of web pages, industries full of PCs, phones and cars etc jointly testify, in each case, FSCO/I is a reliable index of design; as RTH knows or full well could easily confirm to the point where this is glorified common sense. The needle in haystack analysis shows why, with 10^57 solar system atoms as observers, each observing a 500 coin tray flip every 10^-14 s, for 10^17 s, we see that the sample size to config space size for 500 bits is as one straw to a cubical haystack 1,000 light years on the side, comparable to our galaxy at its central bulge. Superpose on our neighbourhood, blindfold, reach in and pick a one-straw size sample at random. With all but absolute certainty, straw. That is, as RTH full well knows from the proverb but is irritated by, a blind and small sample is overwhelmingly likely to be haystack, not needle. In this case, it means FSCO/I is an inductively and analytically strong sign of design. Where, obviously, he and his ilk -- after years of failed attempted counter-examples -- are unable to show blind chance and mechanical necessity producing FSCO/I. However, locked up in an ideology that demands otherwise, he is desperate to dismiss what the induction tells us. Sadly, he goes beyond such and ends up enabling those who have tried to expose the names of uninvolved family including minor children, and to publish the street address of same. And he refuses to face the difference between dealing with heckling, personal attack and slander, which he and his ilk would so desperately love to tag as "censorship," and undue suppression of publication. A sad picture. Surely, he can do better, and the founder-owner of TSZ can do better. KF kairosfocus
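The sampling arithmetic in the comment above is easy to reproduce. A minimal Python sketch, using only the figures stated there (10^57 atoms, one observation per 10^-14 s, 10^17 s, a 500-bit configuration space), gives the sample-to-space ratio:

```python
# Minimal sketch of the needle-in-haystack search-resources arithmetic above.
# All figures are the ones stated in the comment; nothing else is assumed.

from math import log10

observers      = 1e57   # solar system atoms (order of magnitude)
obs_per_second = 1e14   # one observation every 10^-14 s
seconds        = 1e17   # time allotted

samples_log10 = log10(observers) + log10(obs_per_second) + log10(seconds)  # 88
space_log10   = 500 * log10(2)                                             # ~150.5

print(f"possible observations: ~10^{samples_log10:.0f}")
print(f"config space for 500 bits: 2^500 ~ 10^{space_log10:.1f}")
print(f"fraction sampled: ~1 in 10^{space_log10 - samples_log10:.1f}")
```

On those inputs the blind sample covers roughly 1 part in 10^62 of the space, which is the point of the straw-and-haystack comparison.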
Hi Joe, Re your comments above on questions 1 to 4 by Richard Hughes, I would like to make a distinction between an attribution of design and a design hypothesis. The latter is required to make testable predictions which scientists can investigate; whereas with the former, all we need to show (with a high degree of plausibility) is that the outcome in question exhibits a high degree of specified complexity. Hence we can know that Stonehenge was designed without knowing the who/what/when/where/why. Scientists, however, don't like twiddling their thumbs: they need work to do. Faced with a choice, most of them would rather check out an implausible but fruitful hypothesis than endorse a more plausible hypothesis that gave them no leads to follow up. That's why naturalistic theories of origins get the (undeserved) attention they do: there are lots of competing origin hypotheses, ranging over multiple pathways. Scientists can also give free rein to their imaginations, and it is not hard to think of more and more outlandish proposals. The only constraints that these hypotheses have to satisfy are that they have to stick to the sequence of events we have observed, as well as the available time (four billion years). It is very easy to poke fun at these speculations, but the only way to effectively counter them is with a rival hypothesis of our own. There are a few other reasons why a detailed design hypothesis is required, too. First, it's not just Stonehenge we are talking about here. It's a whole planetful of organisms of various kinds, which are often competing against each other. When we claim that all of these were designed, the question naturally arises: what for? Second, we know the timescale involved: billions of years. Designers typically don't take that long to do a job, so that's a prima facie argument against design that we have to address up front. Yes, I know it's horribly unfair, as the Darwinists have yet to show that their own hypothetical account of origins is capable of doing the job within the time available, but let's face it: it's an obvious criticism of the design hypothesis, and we're not going to make any headway in gaining adherents until we address the "time question" up-front. Third, there's the objection from apparent mal-design: laryngeal nerves, prostate glands and suchlike. Once again, the difficulty posed by examples such as these is far overshadowed by the problem of explaining the origin of the simplest living cell as a result of unguided processes, but once again, it's a very obvious criticism of the design hypothesis, and we can't run away from it. Fourth, there's the moral objection: the long process of life's unfolding over the last four billion years has taken the lives of many creatures, and one wonders at the motives of any designer who would employ such a costly process to achieve their goals. Again, this is an emotional argument, but humans (like it or not) are emotional creatures. So there we have it. As I see it, we're never going to make much headway until we address these questions. Human scientists feel a pressing need to ask them, and as I see it, we need at least the outline of an answer to these questions before we can get a hearing for Intelligent Design in scientific circles. Hence my call for multiple Design hypotheses. My tentative hypothesis is almost certainly wrong in a big way, but at least it's an attempted answer, which tries to address the popular objections I alluded to above. vjtorley
Hi kairosfocus, First of all, I'd just like to apologize for putting up this post without consulting you first. That was rather thoughtless of me. Petrushka's reply was made in a comment inside this post by Richard Hughes over at TSZ: http://theskepticalzone.com/wp/?p=4228 . Second, I would pretty much agree with your criticisms in comment #8 above. There has to be a demonstration of the possibility of body plan evolution (and for that matter, OOL) before any evolutionary model can be taken seriously. All too often, evolutionists have lowered the bar by claiming that all they have to do is poke a hole in any argument which purports to demonstrate the impossibility of evolution. But that's not the same as demonstrating that the proposal you're putting forward is a workable one. Third, I think your remarks on the time available are crucial to the discussion. Evolution has to not only work, but also satisfy the time constraints posed by the four-billion-year history of life - and all the indications so far suggest that it would take many orders of magnitude more time to get from organic chemicals to the first cell, and from the cell to a complex animal, than the time available in the fossil record. Fourth, I would endorse your remark on FSCO/I bits. All that needs to be shown is that we are talking about more than 500 bits, which places the (specified) outcome far beyond the reach of chance and/or necessity. A precise calculation of the FSCO/I in a living thing is not required. Darwinists, to make their theory credible, have to show that the FSCO/I in a living thing is less than that. vjtorley
And wrt the "bottom-up" vs "top-down" part, guess who said the following:
It seems to me that what the “code skeptics” are saying is that if we can account for the origin of the genetic code in terms of either bottom-up processes (e.g. unknown chemical principles that make the code a necessity), or bottom-up constraints (i.e. a kind of selection process that occurred early in the evolution of life, and that favored the code we have now), then we can dispense with the code metaphor. The ultimate explanation for the code has nothing to do with choice or agency; it is ultimately the product of necessity. In responding to the “code skeptics,” we need to keep in mind that they are bound by their own methodology to explain the origin of the genetic code in non-teleological, causal terms. They need to explain how things happened in the way that they suppose. Thus if a code-skeptic were to argue that living things have the code they do because it is one which accurately and efficiently translates information in a way that withstands the impact of noise, then he/she is illicitly substituting a teleological explanation for an efficient causal one. We need to ask the skeptic: how did Nature arrive at such an ideal code as the one we find in living things today? By contrast, a “top-down” explanation of life goes beyond such reductionistic accounts. On a top-down account, it makes perfect sense to say that the genetic code has the properties it has because they help it to withstand the impact of noise while accurately and efficiently translating information. The “because” here is a teleological one. A teleological explanation like this ties in perfectly well with intelligent agency: normally the question we ask an agent when they do something is: “Why did you do it that way?” The question of how the agent did it is of secondary importance, and it may be the case that if the agent is a very intelligent one, we might not even understand his/her “How” explanation. But we would still want to know “Why?” And in the case of the genetic code, we have an answer to that question. We currently lack even a plausible natural process which could have generated the genetic code. On the other hand, we know that intelligent agents can generate codes. The default hypothesis should therefore be that the code we find in living things is the product of an Intelligent Agent.
Nope, it wasn't me although I definitely agree with that. click here That is another reason I am pretty sure we (VJT and myself) are just talking past each other wrt TSZ "challenge". Joe
My first reaction after reading Petrushka's essay is that he has essentially thrown in the towel. He is admitting that there is nothing to support his beliefs. Otherwise he would be providing evidence. Welcome to the ID camp, Petrushka, where we base our ideas on scientific evidence. jerry
Related to being in Africa and rhinos. Apparently ground rhino horn powder is a passion for many in Asia, especially China. There is supposedly someone who has accumulated large amounts of this rhino horn powder and is paying poachers just to kill the rhinos so his stash gets more valuable. He doesn't care if he gets the rhino horn, only that the supply is made smaller. About one rhino a day is killed in Kruger by poachers. Also one guide told us they shoot poachers on the spot if they are killing an animal, usually elephants and rhinos. They realize that these poachers are destroying their future livelihood. jerry
niwrad @42:
If “seeing” by eyes is the conditio sine qua non of knowledge, then half of modern science is not knowledge and is not believable. Have you ever seen, say, an electron, a black hole, a quark, a multiverse…?
Excellent point. And taking this further, even the things that we do see may be illusory. Which takes us to the Cartesian conjecture, "I think, therefore I am". Mapou
And just to further support my claims: ID the future
Critics of intelligent design theory often throw this question out thinking to highlight a weakness in ID. Richards shows that the theory's inability to identify the designer is not a weakness, but a strength. ID does not identify the designer because ID limits its claims to those which can be established by empirical evidence. As CSC Senior Fellow Dr. Michael Behe puts it: "[A] scientific argument for design in biology does not reach that far. Thus while I argue for design, the question of the identity of the designer is left open."
Joe
There is only one issue in evolution and that is the origin of new alleles and their corresponding proteins. In OOL there is the issue of where all the complexity in the cell came from, which can be thought of as a similar issue. Each had to use a slow building process. OOL is the more challenging one because of the complicated protein/RNA entities that are used for transcription and translation. It seems just the origination of the ribosome complex is off the charts, but the Darwinists believe it just magically appeared in a short time 3.6 billion years ago. But to get to the point of the OP. Molecular evolution must be true. But the problem for the Darwinist is that every new allele for a protein must leave a genetic trail and will be available in the genome. Also, the process that generates the new alleles must have millions of failures. There should be examples in the various genomes of DNA sequences that did not make it to a new allele. I do not know if I am being clear, but if a new allele appears then there should be lots of evidence in related species of this sequence failing to make it to the functional allele. For example, Nick Matzke made a big point that horses and rhinos are descended from a common ancestor. If that is true, then there should be lots of evidence in either species of failed genomic sequences that came to fruition in the other species. So the key to proving the Darwinist proposition is to find the forensic trail in the various genomes. This will get easier as literally tens of thousands of genomes for a species are mapped and compared. The non-coding sequences will be the most important in such a project because it is here that evidence of new or failed sequences will appear. Then it will be the job to look for these failed sequences in other species to show that they did not make it to a functional allele. The whole basis of Darwinian evolution is that these alleles develop slowly but they do not appear out of nowhere. There must be evidence of the failures some place in related species in order for Darwinian evolution to be true. It is hot and steamy here at Victoria Falls in Zimbabwe in Central Africa and I have contracted a jungle fever, which is why I even dared waste time and look at this. jerry
-
Intelligent design begins with a seemingly innocuous question: Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause? Wm. Dembski
Yes, they can. Most, if not all, anti-IDists always try to force any theory of intelligent design to say something about the designer and the process involved BEFORE it can be considered as scientific. This is strange because in every useable form of design detection in which there isn't any direct observation or designer input, it works the other way, i.e. first we determine design (or not) and then we determine the process and/or designer. IOW any and all of our knowledge about the process and/or designer comes from first detecting and then understanding the design. IOW reality dictates that the only possible way to make any determination about the designer(s) or the specific process(es) used, in the absence of direct observation or designer input, is by studying the design in question. If anyone doubts that fact then all you have to do is show me a scenario in which the designer(s) or the process(es) were determined without designer input, direct observation or by studying the design in question. If you can't, then shut up and leave the design detection to those who know what they are doing. This is a virtue of design-centric venues. It allows us to neatly separate whether something is designed from how it was produced and/or who produced it (when, where, why):
“Once specified complexity tells us that something is designed, there is nothing to stop us from inquiring into its production. A design inference therefore does not avoid the problem of how a designing intelligence might have produced an object. It simply makes it a separate question.” Wm. Dembski- pg 112 of No Free Lunch
Stonehenge- design determined; further research to establish how, by whom, why and when. Nasca Plain, Peru- design determined; further research to establish how, by whom, why and when. Puma Punku- design determined; further research to establish how, by whom, why and when. Any artifact (archeology/anthropology)- design determined; further research to establish how, by whom, why and when- that is unless we have direct observation and/or designer input. Fire investigation- if arson is determined (ie design); further research to establish how, by whom, why and when- that is unless we have direct observation and/or designer input. An artifact does not stop being an artifact just because we do not know who, what, when, where, why and how. But it would be stupid to dismiss the object as being an artifact just because no one was up to the task of demonstrating a method of production and/or the designing agent. And even if we did determine a process by which the object in question may have been produced, it does not follow that it will be the process used. As for the people who have some "God phobia": Guillermo Gonzalez tells AP that "Darwinism does not mandate followers to adopt atheism; just as intelligent design doesn't require a belief in God." (As a comparison, no need to look any further than abiogenesis and evolutionism. Evolutionitwits make those separate questions even though life's origin bears directly on its subsequent diversity. And just because it is a separate question does not hinder anyone from trying to answer either or both. Forget about a process except for the vague "random mutations, random genetic drift, random recombination culled by natural selection". And as for a way to test that premise, "forgetaboutit".) For more information please read the following: Who Designed the Designer? (only that which had a beginning requires a cause) Mechanisms in Context Intelligent Design is about the DESIGN not the designer(s). The design exists in the physical world and as such is open to scientific investigation. All that said we have made some progress. By going over the evidence we infer that our place in the cosmos was designed for (scientific) discovery. We have also figured out that targeted searches are very powerful design mechanisms when given a resource-rich configuration space. Intelligent Design is the study of patterns in nature that are best explained as the result of intelligence. -- William A. Dembski Joe
scordova #3
So I accept ID is true, that the Designer is God, that is a superior theory, but I also acknowledge the hypothesis has its challenges in terms of believability because of the absence of seeing the Designer.
If "seeing" by eyes is the conditio sine qua non of knowledge, then half of modern science is not knowledge and is not believable. Have you ever seen, say, an electron, a black hole, a quark, a multiverse...? niwrad
gpuccio- OK as long as we agree that those questions are irrelevant to the detection and study of design (in nature). My point is we may never be able to answer those first 4 questions. The designer(s) is (are) way above our pay grade and so are the methods used. As I said we still don't know the who, how and why of Stonehenge and that is something that is within our capabilities. We may be able to figure a way it could have been manufactured but that doesn't mean it will be the way it was. The more important questions to answer after we have determined design is "how does it work" and "can we fix it if it malfunctions". The people who need to know the who and how ain't interested in science as they require proof, which isn't part of science. If we knew the who and how we wouldn't have a design inference. Design would be a given. Joe
Joe: It's just a question of words. You use "ID" to mean the design inference. I agree that questions about the nature of the designer are not necessary for the inference itself. I use "ID" to mean the ID paradigm and the specific ID theory for biological information. As a general paradigm or theory, all possible entailments and further questions are part of the theory itself. I think we essentially agree. gpuccio
gpuccio:
First of all, I don’t agree with Joe that they are “irrelevant” to ID. I can agree that the first two, and maybe the third, are not necessary for the design inference. The other three are part of the design inference itself, and however, all of them are pertinent issues that should be part of any design paradigm for biological information.
Wm Dembski in "No Free Lunch" (p111-12) says that they are separate questions- that is separate from ID. And that is because in order to answer those questions we must first determine design is present and then study it and any other relevant evidence. In the absence of direct observation or designer input, the only possible way to make any scientific determination about the who, how, why, when and where, is by studying the design and all relevant evidence. That is how it is done in archaeology, forensic science and SETI. And again Dembski states that ID does not prevent people from attempting to answer those questions. And the fact that those questions exist proves that ID is NOT a scientific dead-end. Joe
SC: I have responded to Ewert, here. KF kairosfocus
F/N: The pivotal for the PPPS issue is, does a mouse or a trilobite or an acorn worm or a flowering plant, or a sponge or a fungus on a tree stump or a sea urchin, or a eukaryote etc, at origin, exhibit an increment in dFSCI and/or FSCO/I in order to come into existence, this being a threshold of 500 - 1,000 bits? Blatantly, yes. That means, debates on copies (and copies imply FSCO/I-rich copying mechanism) are irrelevant, the issue is that origin of novel FSCO/I requires explanation, including origin of copying mechanism. For instance at OOL, that means a von Neumann Self Replicator (vNSR) has to be explained on physics and chemistry of a pond, a volcano ocean vent, a gas giant moon or a comet etc, Tour's challenge no 1 -- the molecular nanocar guy. At OOBPs that means the increment to the new info, so there is room for the overlap between human and kangaroo genomes, but also we have to account for the distinctive verbal language using ability of humans including vocal tract, auditory systems and brain processing, as well as how all of this becomes fixed in a population. I assure you 500 bits of code is peanuts to do any significant cybernetic entity. KF kairosfocus
129,935. VJT, in two weeks or so, your page has put on close to 100k hits. New phenomenon, and I don't doubt it helped trigger the attempt we are discussing in this thread. KF kairosfocus
The roosters here in Montserrat say, good day ahead. KF kairosfocus
F/N: P's PPS is a question-begging turnabout that dodges the fundamental origins/historical science challenge. Namely, we did not see the remote past of origins, nor do we have generally accepted record. Therefore we are forced to observe traces in the present and explore characteristic causal factors that have demonstrated capability to generate materially similar effects -- the vera causa principle -- and then we can reconstruct the past from that as a model explanation per IBCE. We confront FSCO/I in life and particularly dFSCI, digital, functionally specific coded complex information. Per vera causa, the ONLY empirically warranted factor capable of creating such is design. If you deny that, kindly provide a clear case in point that does not let design in the back door by failing to lock the door properly. We have every inductive logic, epistemic right to then explain dFSCI in life on like causes like. To overturn this -- as has been pointed out thousands of times so the twistabout is willful -- you will need to show vera causa on the behalf of blind chance and mechanical necessity, creating dFSCI at or beyond 500 bits, i.e. solve the blind search needle in haystack problem for a space of 3.27 * 10^150 configs, without injecting active info, setting up an oracle, using foresight etc. And if you want to suggest the cosmos builds in search algors in physics, that steps you up to search for search. A search being a subset of a set, the S4S config space is the power set of the first space, of cardinality 2 ^ [3.27*10^150]. That is, the problem exponentiates. KF kairosfocus
UB: I am looking forward to seeing your site too! :) gpuccio
Dionisio: Beautiful clear blue sky and sunny day here too (Italy). Have a good day. gpuccio
F/N: It strikes me that trying to suss out the motives, capabilities, strategies, styles etc of designers, design teams or whatever involved is an abductive, cumulative inference exercise. As such, I would bring to bear both biological and cosmological evidence and reasoning, starting from our evidently fine tuned cosmos in which rooted in the physics, H, He O and C are the 1st 4 elements with N close. As in big clue. The first gives us stars, the second the periodic table, the third, and fourth, water and organic chemistry, the fifth for our galaxy, amino acids and the protein family. That to my mind points to cosmological design towards life on planets in Galactic Habitable Zones, in circumstellar habitable zones, implicating powerful, sophisticated, skilled design long before life enters the picture as such. So, even were it shown that "blind" mechanisms can work with the ingredients we are already in the context of cosmological programming. Mix in contingency of the observed cosmos and its material constituents and multiply by evident design and we are looking at contingent vs necessary being root cause issues . . . and yes such are epistemology, logic and phil, consequential on crossing the border. With a necessary being at root even through multiverse speculations. Such a necessary being is of independent existence (no dependence on on/off enabling causal factors), and a serious candidate is either impossible (mutually contradictory attributes) or actual. To make simple: can 2 not exist, did it have a beginning, etc? 2 + 3 = 5? That points to eternal mind with power to conceive and call a material cosmos into being as a very serious candidate root of being. And nope, flying spaghetti monsters or pink unicorns etc are not serious candidates. In this context, biological world evidence fits into that picture, first set the cosmological stage then populate it with actors and let the play begin. Of course, all of this has long since been put on the table, but we are dealing with determined objectors who too often give the impression of simply pushing strings of favourite talking points rather than seriously grappling with evidence and issues in dialogue. KF kairosfocus
Upright BiPed @ 23
Dr Torely, instead of...
Just a minor observation, unrelated to the subject being discussed: are the letters 'l' and 'e' in their correct positions within the referenced name? Isn't it Dr Torley? It's around 10:45 am here in this part of Europe. Beautiful clear blue sky and a sunny day. Y'all have a good day too. Dionisio
UB: Go, go, go, mon! Let's see that site! Give us the links! Make sure to link UD and put in a glossary and FAQs as well as weak-argument rebuttals. Do you plan a "where to go from here" page of onward links, refs and notes? A vid clips section? A news & views blog? A forum? Etc., etc., etc.? Tell us, mon. KF kairosfocus
PS: Impedance matching at interfaces is a significant systems building challenge, in ever so many ways. kairosfocus
Joe & VJT: Can I suggest that both have a point? We do often start at a global, encapsulating block diagram level in designs of complex systems. But, that is because we have also worked out the device physics (and here, chemistry), the fit, organisation and function of first level circuits, networks, components, sub-assemblies etc, and have further worked out protocols and issues of interfacing, impedance matching, coupling, co-ordinating etc. So there are three tiers, each with significant challenges that have to be simultaneously solved and solved for the lifespan of the entity. BTW, that is also where embryology and its analogues become important in bio systems: start from one cell and self assemble. A major engineering challenge if ever I have seen one. Brings us full circle to the von Neumann Kinetic self replicator architecture and its layers of irreducibly complex aspects. Including, codes and language to store the blueprints. Which means, we have to sort out the OOL problem, too. Tour has a serious point, as a molecules up man. So did Paley in his Ch 2 that usually does not come up in the dismissive remarks I have seen. KF kairosfocus
F/N: Just to pick another point: P: >> For evolution to be true, the earth must be old enough to have allowed time for these sequential changes. This is a testable proposition. >> H'mm, wasn't it a population-genetics result that fixing two co-ordinated mutations in a mammalian line, with a generation span of several years and a reasonably sized population, would take 200+ million years? Wasn't it also a finding presented by Sternberg that whale evolution faced similar challenges? Isn't it the case that ape-to-man is supposedly 6 - 10 million years, and that at 2% of 3 bn bases we are talking about 60 million bases? (As in, up to 120 million bits.) So, is a timeline that puts our galaxy at say 12 BY, and our solar system around a 2nd-generation, Population I star (yes, the naming runs backwards) with high metallicity, dated 4.6 BY on the H-R pattern, anywhere near the ballpark required by the relevant blind-watchmaker mechanisms at realistic mutation rates? In short, it looks like the available time is a challenge for humans, whales and of course the Cambrian revolution. That last would need, dozens of times over, 10 - 100+ million bits of fresh genetic information. And we have not touched the islands-of-function challenge on this, which would arise in a cumulative micro-evolutionary context. For islands, I simply point to GP's point on isolated protein clusters in AA sequence space. Remember, this is common even between species deemed closely related. So, even granting for argument the idea of a vast connected continent of functional forms that can be incrementally traversed, the time to credibly do so is an open question. And of course, the vera causa, directly observed case in point of such OOBP cumulative descent with mods is: _________ , published by: ________ , and with recognition of authors: _______ . KF kairosfocus
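For readers who want to check it, the 2%-of-the-genome arithmetic in that comment works out as follows in a few lines of Python. The sketch is illustrative only; the genome size, divergence fraction and time window are simply the round numbers quoted above, taken at face value:

genome_bases = 3_000_000_000        # ~3 bn bases, as quoted
divergence_fraction = 0.02          # "2% of 3 bn bases"

changed_bases = genome_bases * divergence_fraction     # 60,000,000 bases
changed_bits = changed_bases * 2                        # 2 bits per base: 120,000,000 bits
print(f"{changed_bases:,.0f} bases, up to {changed_bits:,.0f} bits")

# Spread over the commonly cited 6-10 million year window:
for years in (6_000_000, 10_000_000):
    print(f"~{changed_bases / years:.0f} bases fixed per year over {years:,} years")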
The code is a set of context-specific regularities (i.e. relationships) established in a local system. Because of this they cannot be measured, only demonstrated. They are established by proteins, the aminoacyl-tRNA synthetases, which I refer to as the protocols in the semiotic argument. Here is a short clip from my upcoming website:
The onset of recorded information

No matter which theory one follows regarding the Origin of Life, there is one thing that all observers can be certain of. Prior to the organization of the first living cell on earth, unique physical conditions had to arise to make that organization possible. These conditions are brought about by the presence of two sets of physical objects operating in a very special system. In order to organize the cell, a set of representations and a set of protocols must arise to bridge the (necessary) discontinuity between the medium of genetic information and its resulting effects. One set must encode the information and the other set must establish what the result of that encoding will be. These are the physical necessities of the system. But because the organization of the system must also preserve the discontinuity, a group of relationships is established that otherwise wouldn't exist, producing effects which are not derivable from the material make-up of the system. These unique conditions are the inexorable mandate of translation (which were proposed in theory and confirmed by experiment). This system is something that the living cell shares with every other instance of translated information ever known to exist. It's the first irreducibly complex organic system on earth, and from it all other organic systems follow. Moreover, it is specifically not the product of Darwinian evolution - it's the origin of life's capacity to change and adapt over time. It marks the rise of the genome, and the starting point of heredity. Not only must these representations and protocols arise within an inanimate environment, but the details of their construction must be simultaneously encoded in the very information that they make possible. Without these things, life on earth would simply not exist.
Upright BiPed
GP at 22 I agree completely. Upright BiPed
...Intelligent Design theory needs to say more about the Designer in order to generate a scientifically testable model. I make the following assumptions about the Designer (Whom I believe to be God, although I would not claim to be able to prove this belief scientifically):
I trust I am not the only one who sees the general problem inherent in this statement. Dr Torely, instead of promoting the idea that ID must make claims about the designer (claims it cannot support), can you please do ID the simple favor of just saying that you personally wish to make those claims? That way we can keep the legitimate scientific claims of ID separated from the spurious claims that some feel compelled to make, as well as from those that serve the larger interests of ID's critics. Upright BiPed
Joe: I agree completely with your #15. Absolutely true. Let me repeat your main, important statement: The DNA is NOT the code - the code is the rule for converting the DNA into a polypeptide. As an additional piece of information, I would point out where the rule is really "written" in the biological context. The rule is mainly in the 20 aminoacyl-tRNA synthetases, 20 very big and complex universal proteins, which couple each tRNA (and hence each codon) with the correct amino acid. 20 proteins, each of them a very big and complex molecule (length range about 300 - 1100 AAs). That's a lot of dFSCI, just at the beginning of the process that reads the information in DNA! gpuccio
VJT: Just my view about Richard Hughes' "questions". 1) Who is / was the designer? 2) What was their motivation(s)? 3) What was their method of fabrication? 4) How many design interventions were there? 5) What specifically was designed? 6) What specifically wasn't designed? First of all, I don't agree with Joe that they are "irrelevant" to ID. I can agree that the first two, and maybe the third, are not necessary for the design inference. The other three are part of the design inference itself, and in any case all of them are pertinent issues that should be part of any design paradigm for biological information. The answer to the last three is very easy because, as I said, they are part of the design inference itself. So: 4) How many design interventions were there? Answer: we can infer design for each context in natural history where new dFSCI (CSI) appears in biological beings. So OOL, the Cambrian explosion, and the appearance of each new protein superfamily are all good contexts for a design inference, and therefore for the legitimate hypothesis of a specific design intervention. 5) What specifically was designed? Any object that exhibits specific, new dFSCI for the first time. 6) What specifically wasn't designed? The correct answer is: for any object that does not exhibit dFSCI (or CSI), we have no reason to make a design inference. There is no scientific reason to believe that those objects are designed. So, let's go to the difficult ones: 1) Who is / was the designer? There is no final, empirically supported answer to that, at least for now. We can certainly make hypotheses. The only thing that any hypothesis compatible with ID should include is the idea of a conscious intelligent being (or many of them) as the designer. Each hypothesis should be empirically tested, as far as that is possible. 2) What was their motivation(s)? Again, we don't know at present. The same as for point 1. But here some more specific hypotheses can be made, IMO, if not about the motivation for the whole existence of biological beings (which remains more of a philosophical issue, at present), certainly about many specific patterns in biological design. For example, I have many times suggested that the main motivation apparently behind the proliferation of complexity in biological beings seems to be the desire/necessity to explore new possible functions, which cannot be implemented with the existing complexity. It is also true, as often suggested by "design enemies", that at least part of the observable biological design seems to be "destructive", or at least apparently cruel, by our standards. All those facts should be considered in making hypotheses about motivations. But I believe that this issue remains, at present, largely philosophical. 3) What was their method of fabrication? This is the most interesting of the "big questions". Here, more specific scientific approaches are already, at least in part, within the range of our discussion. I would split that question into two parts: 3a) How does the designer interact with biological matter to design new objects? 3b) What specific method of function implementation has been used? My answers: 3a is obviously related to our hypotheses about the identity and nature of the designer. However, if we exclude the only reasonable scenario for a physical designer (aliens), the other scenarios are likely to hypothesize non-physical designers. That is also my personal position, as often stated here. 
In that case, the question of how a non-physical consciousness can interact with matter is certainly pertinent. It is not a purely philosophical question. It has a definite scientific aspect. For all those who believe (like me) that there is no possible explanation of the empirical fact of the existence of subjective consciousness in terms of objective arrangements of matter, the problem is similar to the old problem of how we (conscious human beings) can interact with our brain and body (matter). That problem is difficult, but it has been addressed many times in scientific form. Quantum scenarios are at present, IMO, the best hypothesis. My point is: what works to explain the consciousness/matter interaction in human beings is the most natural choice to try to explain the consciousness/matter interaction in biological design, whoever the designer is. And, finally: 3b. This is the most empirically approachable part of the "big questions". The most obvious part seems to be: top-down or bottom-up? I don't believe we have any answer at present, but the answer is there, in the biological record of natural history: fossils, genomes, proteomes, anything that can add to our understanding of how and when new designed objects appeared. Indeed, the top-down and bottom-up theories have definite entailments in what can be observed in natural history. However, I believe that design, by definition, always has at least a partial top-down component. Design starts as a representation and a desire in the consciousness of the designer. So that is the "top" of the process, and it is at the beginning of the process itself. But a designer can certainly use bottom-up approaches as part of his general top-down strategy. We have many examples of that in human engineering. So the two things are not mutually exclusive. In natural history, any demonstration of intelligent search using random variation as part of itself would be a good example of a bottom-up strategy. So RV + intelligent selection is a (partly) bottom-up strategy. By contrast, direct implementation of the information from conscious understanding (IOWs, directed mutation) would be an example of a top-down strategy. The facts will answer that, as always happens in science. gpuccio
Moose Dr @19, I like your arguing style even if you're preaching in the wilderness. Your pearls of wisdom will be trampled on because you're arguing with pigs. Mapou
Petrushka makes an interesting restatement of old canards. Most particularly, "my theory is scientific, yours is not, therefore mine is right." If one looks at the first half of Petrushka's thesis, one sees a good-quality list of "testables" in neo-Darwinism. What Petrushka fails to see is that the testability of the theory does not depend on the availability of another theory. Petrushka's points of testability, again: (Note that when Petrushka speaks of evolution he is clearly talking about neo-Darwinian evolution, not merely change over time or even universal common descent.) > For evolution to be true, molecular evolution must be possible. Cool, where is the science that establishes that molecular evolution is possible, that the "islands of function" are not "separated by gaps greater ..."? I have not found this evidence. > For evolution to be true, the fossil record must reflect sequential change. There is clearly some, even quite a lot, of sequential change demonstrated in the fossil record. But why do palaeontologists keep talking about "punctuated equilibrium"? Isn't "punctuated equilibrium" basically the equivalent of change that exceeds what should be possible in the time frame allowed? There are hypotheses that attempt to explain this phenomenon, but they seem painfully weak to me. Most importantly, the phenomenon is not a comfortable fit with the theory. > For evolution to be true, the earth must be old enough to have allowed time for these sequential changes. Yes! And Sir Fred Hoyle argued strongly that there isn't nearly enough available time to create the universal ultra-conserved genes of life. He attempted to solve the problem by proposing that life came from elsewhere. He even recognized that the universe is not old enough to provide an explanation, so he argued against the big bang. So where is the strong scientific case to counter his concerns? > Evolution has entailments. ... It is either right or wrong ... Yes! Yes! Yes! Let evolution stand or fall on its own merit or lack thereof. No other theory needs to be presented. I wholeheartedly agree with Petrushka's entailments. They are rather good. If one could convince me on the evidence that molecular evolution is possible (as described), that the fossil record reflects sequential change, and that the earth is old enough, then I would join the neo-Darwinian evolution camp. But please, convince me on the evidence that these challenges have adequate answers, not on the canard that the other theory isn't as scientifically elegant. Moose Dr
Anything Petrushka argues will be interesting. However, getting off the ground is tough in his case. How an abstract representation would emerge seems, at least at this point in time, like a stone wall. Joe, I think Patrick's hatred for ID is getting the best of him in this case. Someone capable of programming at his level should have no trouble following the line of evidence, unless he simply despises ID so much that ID's destruction matters more to him. junkdnathewhite
From my perspective, it appears that Petrushka has laid out the outlines of why evolution is a better theory, without providing any evidentiary support. In fact, the testable points he lays out would appear to have all been falsified at some level. It's hard to tell without having his full thesis. But it certainly looks like he lays out a list of tests that evolution has failed (saltation, fossil record stasis, Haldane's Dilemma), then posits Materialism under a different guise as Regularity. drc466
Error. "For evolution to be true the fossil record..." etc., etc. Nope. The fossil record must first be shown to represent the deposition events, and these to be separated by heaps of time. Anyway, the fossil record only shows fossils. It doesn't show evolution in process. It's just speculation that the fossils evolved from each other. There is no biological evidence, and their sequence would just be a coincidence even if it was all true. There is an error of presumption in treating fossil data points as fossil evidence of types of something evolving or changing over time. The fossil record can't be used as evidence of evolution, including ideas about predictions. It's a logical fallacy. Robert Byers
From Wikipedia:
A code is a rule for converting a piece of information (for example, a letter, word, phrase, or gesture) into another form or representation (one sign into another sign), not necessarily of the same type.
So with the genetic code we have DNA, specific codons of DNA, that then get transcribed into mRNA, which then gets processed and translated into the unrelated polypeptide. So you have one type of molecule representing another, different type of molecule. The DNA is NOT the code - the code is the rule for converting the DNA into a polypeptide. Two totally unrelated molecules - meaning one does not make up the other. Do the DNA codons become the amino acids? No, the DNA codons REPRESENT the amino acids. There isn't any physico-chemical reaction in which the codons transform into amino acids. The code is real and completely arbitrary. Joe
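As a concrete illustration of the point being made here (the rule, not the molecule, is the code), the mapping can be written as a simple lookup table. The handful of codon assignments below are standard; everything else about the snippet is purely illustrative:

# Toy lookup-table version of the genetic code (a few entries only;
# the full standard code has 64 codons).
CODON_TABLE = {
    "ATG": "Met",   # start codon
    "TGG": "Trp",
    "GAA": "Glu",
    "TAA": "STOP",
}

def translate(dna):
    """Apply the rule codon by codon until a stop codon (toy version, no error handling)."""
    peptide = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "?")
        if aa == "STOP":
            break
        peptide.append(aa)
    return "-".join(peptide)

print(translate("ATGTGGGAATAA"))   # -> Met-Trp-Glu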
And over on the TSZ thread VJT linked to, we have none other than Patrick "MathGrrl" May denying that the genetic code is actually a code. The genetic code fits the definition of an actual code. I guess real codes are an issue for materialism and therefore must be denied, except when attributed to us. Joe
VJT posted:
(c) I deny the possibility of any agent’s designing an organism from the top down. In order for a thing – be it a person, an animal, a plant or a mineral – to be a genuine entity in its own right (and not just a virtual reality imitation of an entity), it has to be fully specified, at all levels, from the bottom to the top. This is because the top-level of an entity does not, and cannot, determine all of the details at the bottom. I’ve written more about this here: http://www.angelfire.com/linux.....eser2.html . What this means is that the Designer cannot achieve His ends merely by willing the results; He has to actually do some engineering, at the molecular level;
Umm, designing something from the top down means first conceptualizing the whole and then making it so. Computers and cars are top-down designs, and all the details at the bottom are there. That is part of the process. You get to the bottom details by working your way down from the top. But anyway, it seems that you are using the term in a manner that is not consistent with how designers use it. Joe
As for Richie Hughes' input:
Who is / was the designer? What was their motivation(s)? What was their method of fabrication? How many design interventions were there? What specifically was designed? What specifically wasn’t designed?
The first 4 are irrelevant to ID. As Dembski et al. say, ID is about the study of design in nature. And to answer those questions, design is first detected and then studied. Heck, we don't know the who, the motivation nor the how of Stonehenge, and that structure is within our capabilities. So although VJT answered them, it doesn't mean they are part of ID. ID doesn't stop anyone from answering them; it is just that they are separate from ID. This has been explained many times to Richie. And it is very telling that he refuses to understand it. Joe
F/N: The first PS by Petrushka seems to be a bit of elephant-hurling. What is P's thesis, and specifically what evidence from those journals substantiates it in respect of OOL and OOBPs . . . let's abbreviate origin of body plans, as it will come up fairly frequently . . . in such a way that, on inference to best current explanation [IBCE] per observed, empirical facts, blind watchmaker mechanisms make designoid a better answer than designed? Reckon with the billions of cases of FSCO/I around us, including the text in this thread and the machines we are viewing it on, etc., etc., and our uniform observation as to the source of same, backed up by the needle-in-haystack challenge. KF PS: The count on your article is now 125,503. This is a new phenomenon at UD. kairosfocus
Hi, Petrushka. Long time, no see. I missed you. You know my position well. Each protein superfamily, or if you prefer each basic protein domain, is a saltation, and suggests a design inference. The dFSCI in protein families has been calculated by Durston, and is well beyond the probabilistic resources of our planet. You ask whether "different living things have different quantities of CSI". Yes, I think so. But CSI is best calculated for single objects, like proteins. In that sense, a species which has more protein genes has more CSI (at that level). Obviously, there is a lot of CSI that we cannot yet calculate, for example all the regulatory complexity of the procedures, because we still do not understand well where that information is. This, just to begin. Old themes, but always interesting. gpuccio
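For readers unfamiliar with the Durston-style calculation referred to here, the basic move is to compare the Shannon entropy observed at each aligned position of a family of functional sequences with the maximum of log2(20) bits per residue. The Python sketch below is only a simplified illustration of that idea; the four-sequence "alignment" is invented for the example and is not data from any published paper:

from collections import Counter
from math import log2

NULL_H = log2(20)   # maximum per-site entropy if all 20 amino acids were equally likely

def functional_bits(alignment):
    """Sum over aligned sites of (log2(20) - observed site entropy), Durston-style."""
    total = 0.0
    for site in zip(*alignment):          # iterate over alignment columns
        counts = Counter(site)
        n = len(site)
        h = -sum((c / n) * log2(c / n) for c in counts.values())
        total += NULL_H - h
    return total

# Toy, made-up alignment of four "functional" sequences, five residues long:
toy_alignment = ["MKVLA", "MKVLG", "MRVLA", "MKVIA"]
print(f"~{functional_bits(toy_alignment):.1f} functional bits for the toy alignment")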
PS: I also indicated that I would wait five comments before making a comment (which I distinguish from a full post in my own right). Notice, the above is no. 8. Though, of course, I expected to host such a response myself. I would have given an intro, then laid it out, similar to other guest posts I have hosted. Which, BTW, remains on offer separately from this. If you have something significant to say, talk with me about doing a guest post. kairosfocus
VJT: thanks for letting me see this. Was the response posted somewhere at TSZ, or was it sent to you as an email or the like? (If the former is not the case, I think you or someone else with posting privileges there should post it there with links going both ways, so there can be a real parallel.) I am too busy to do a point-by-point right off, and the above is an outline summary, not a full answer; it would be interesting to see the full response. I do know Petrushka claimed to have been working on a response; one hopes something more substantial will be brought to bear. I will note that the opening point tries to imply, or lead the reader to infer, that the design inference on signs such as FSCO/I is untestable. But that gambit fails, and should long since have been known to be a failure. Any scheme that seeks to generate, say, a sense-making text string in excess of 72 ASCII characters by blind chance and/or mechanical necessity without intelligent direction is a test. So far, for instance, random text generation programs have been set up and have attained 24 characters in a coherent English phrase, about 1 in 10^50, a factor of roughly 10^100 short of the 72-character threshold. This is directly relevant to random program generation and the random writing of functional genetic information. Where, it should be well understood, the culling based on differential reproductive success is a SUBTRACTION-of-information process, so the schemes critically depend on chance variation to incrementally write the megabits of DNA information required to generate body plans. I note, too, that once we move from chemicals in a pond or the like to a functioning cell, the need for complex clusters of parts in proper match and arrangement to work will immediately be a sharp constraint on workable configs out of the space of possible configs in the pond. (One can do a crude toy count by dicing the pond up into cells of appropriate scale and in effect estimating the odds against un-diffusing the parts into one such cell. That should be a warning already.) This sets the stage -- yes it does -- for body-plan origin challenges. Dozens of times over, complex organ systems and networks have to be assembled into coherent wholes. Again, this imposes a stringent constraint on possible configs, and leads straight to the challenge of searching for shores of function in vast seas of non-function. We are dealing with information on the order of tens of Mbits here, even before we deal with the implied population-genetics challenges of fixing so much information incrementally. And as for co-opting other structures, we run into the need to have just-right matched parts available and able to couple and configure, where for many structures, such as wings, loss of function before gain of function is a big challenge. Thus, there is a strong presumption in favour of isolated islands of function. One that can only be seriously overcome by showing empirical cases, i.e. we are looking at the need for showing cause, i.e. the vera causa constraint. It would be interesting to see an actual demonstration of body-plan level evolution on credible blind chance and mechanical necessity, with something we observe. Failing such, the inference behind Darwinism and the various related schools of thought will be unavoidably speculative and ideological. Thus contentious, and those who question will have serious questions when they see things like this from Lewontin:
the problem is to get them [people] to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations [--> ideologically deeply loaded and laced with toxic contempt], and to accept a social and intellectual apparatus, Science, as the only begetter of truth [--> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting]. . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [--> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [--> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door. [NYRB, Jan 1997, cf fuller excerpt and notes here to see that this is not a case of citation out of context that is misleading on the issues being highlighted.]
KF kairosfocus
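The string-search numbers quoted in that comment are straightforward to reproduce. The sketch below is added for illustration and assumes a 128-symbol ASCII alphabet, which is what makes 72 characters correspond to roughly 500 bits:

from math import log10

ALPHABET = 128                     # 7-bit ASCII symbols
for length in (24, 72):
    configs = ALPHABET ** length   # number of possible strings of this length
    bits = length * 7              # 7 bits per ASCII character
    print(f"{length} chars: ~10^{log10(configs):.1f} strings ({bits} bits)")

# 24 chars -> ~10^50.6 (168 bits); 72 chars -> ~10^151.7 (504 bits);
# the gap between the two is a factor of ~10^101, in line with the
# "factor of 10^100" figure quoted in the comment.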
The point of CSI is to see if it is present or not. Its presence is a sign of design. There isn't any need for an exact number and using Crick and Shannon we are only accounting for the physical information wrt biology anyway. Meaning we won't account for all of the biological information. Joe
If CSI or any of its variants are to be cited, please discuss whether different living things have different quantities of CSI. For example, does a human have more CSI than a mouse? Than an insect? Than an onion? Please show your calculation.
Very good point, and I've tried to argue we have to be careful with CSI! The ID community can't even agree on simple CSI calculations: see "Paradox in calculating CSI for 2000 coins". Winston made some very good counter-arguments, and in my view no formulation has universal agreement within the ID community! Sometimes I may seem like a turncoat when I agree with Darwinists, but it's my policy that, if I think they've said something true, even if damaging to the ID enterprise, it is better for the ID community to acknowledge the difficulty and try to find remedies. I have tried to find remedies for smaller cases of design, like homochirality (which is essentially the statistics of fair coins). More complex designs await another attempt at unassailable theories. We've taken a few good first shots, but that doesn't mean the ID community can't improve on clarity, and that means being able to agree on measuring bits of CSI or, better yet, imho, just stating rote probability and dropping information theory for the time being. We were able to vanquish Nick Matzke without having to invoke all the complexities and conceptual abstractions of CSI. That mode of unassailability is what I seek for the ID enterprise. scordova
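The "rote probability" approach preferred in that comment is easy to state for the coin examples it mentions. The sketch below is only a back-of-envelope illustration; the all-heads target and the treatment of each chiral centre as an independent fair coin are simplifying assumptions, not anyone's official CSI formula:

from math import log10

n = 2000                        # coins in the example (or chiral centres, for homochirality)
bits = n                        # a prespecified all-heads pattern corresponds to n bits
prob_log10 = -n * log10(2)      # log10 of the chance of hitting that pattern blindly

print(f"{bits} bits; probability of a blind hit ~10^{prob_log10:.0f}")   # ~10^-602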
vjt:
I also agree with Petrushka that Intelligent Design theory needs to say more about the Designer in order to generate a scientifically testable model.
That's crazy talk. The way to the designer is through the design - that is, in the absence of direct observation or designer input. ID predicts that when intelligent agencies act within nature they tend to leave traces of their actions behind. Both CSI and irreducible complexity are such traces. Joe
LoL! As if that was a response to KF. What Petrushka posts in no way supports blind watchmaker evolution - he/she is still an equivocator. Not only that, he/she is incorrect. I like how "regularity" has been redefined to include everything. That's just hilarious. That we have to be able to know the capabilities of the designer is a joke, because we know the capabilities by what they designed. We cannot test whether humans of thousands of years ago were capable of building Stonehenge. The reference to peer review is a big lie, as there isn't anything in peer review that supports blind physical processes producing molecular machinery. And Newton's four rules of scientific investigation make the explanatory filter relevant - thanks for proving that you don't understand science. Also, it is up to YOU to show that natural history is sufficient - again, your difficulty with science is showing. As for CSI, well, that follows from Crick's Central Dogma and biological information = sequence specificity. And again, you wouldn't have to worry about CSI if your position actually had some evidence to support it. One of the claims of ID is that natural history cannot account for biological information. Joe
Petrushka and I actually share similar views of what constitutes science (sometimes to the chagrin of my ID associates). Darwinian and other forms of evolution have testable mechanisms, but they fail both empirically and theoretically. The biosphere is evolving downward in coordinated complexity. But it is at least directly testable. ID is composed of two kinds of theories: Design theories and Intelligence theories. Design theory merely identifies objects that conform to our notions of something being designed; all that can be formally tested is that something isn't the product of law and chance (a process that tends to evolve toward more uncertainty). That is empirical, that is scientific, but it says nothing of origins or history. It can be used, however, to critique evolutionary theories which claim to create the appearance of a violation of chance and law. So in that sense D-theories (Design theories) are clearly scientific, but they answer different questions than evolutionism does. Now if we combine D-theories with theories of intelligence, we get ID, and then things get complicated. At least in principle we have a model that will work, but then we have an absent designer. The problem is that intelligent agencies don't have to behave with regularity; they can choose not to show up in our lab and field observations. If we had even one observation of God witnessed by all people in the world, we could reasonably hypothesize that he made life. The problem with the Stonehenge example is that we can say it passes the EF and thus conforms to a design according to Design theory (D-theory). We can also believe it was intelligently designed because we know humans exist and can make such designs, and thus we can reasonably assert ID for Stonehenge. The problem with the design of life is that we have not had one universal observation of God (or some other capable designer), whereas we have had universal observations of other humans. Imho, the better model is at least one that will work in principle. A mindless evolutionary model is one that cannot work in principle, but that is not to say ID is without its challenges. I accept ID, and maybe one day all of us will meet the Designer of life. Not only on a personal level does Pascal's wager work, but even on a technological level. If ID is false, what is there to lose? Nothing. If ID is true, there might be much to gain, because then things we might view as junk might be discovered to be wonderful engineering. So in view of the uncertainties on both sides, ID is the superior venture. So I accept that ID is true, that the Designer is God, and that it is the superior theory, but I also acknowledge the hypothesis has its challenges in terms of believability because of the absence of seeing the Designer. A theory might be true, but truth doesn't always equate to believability, and that is the problem with invisible and unseen designers. scordova
Petrushka - parsley? Dionisio
Hi everyone: I'm putting up this comment, not as a critique of Petrushka's model, but purely in order to sketch the outlines of one Intelligent Design counter-model that might be developed. If others want to put up rival models of their own, then I'd invite them to do so. I fully agree with Petrushka that in order for evolution to be true, molecular evolution must be possible. However, I would submit that the evidence suggests that large-scale molecular evolution isn't possible. On the level of protein evolution, evidence amassed by Dr. Douglas Axe and Dr. Ann Gauger points to the fact that the "islands of function" are separated by gaps which are much greater than Nature is capable of traversing, in the time available. (For evidence to support this assertion, the curious reader may consult the relevant articles over at the Biologic Institute website, at http://www.biologicinstitute.org/ .) So that's my Number One reason for rejecting unguided evolution. I also agree with Petrushka that Intelligent Design theory needs to say more about the Designer in order to generate a scientifically testable model. I make the following assumptions about the Designer (Whom I believe to be God, although I would not claim to be able to prove this belief scientifically): (a) I assume that the Designer's motivation was to make a universe fit for living things, and especially conscious, intelligent beings, and to make it in a way that intelligent beings could discover His existence. I summarize the evidence for the fine-tuning argument here: https://uncommondescent.com/intelligent-design/is-god-a-good-theory-a-response-to-sean-carroll-part-two/ . See also Dr. Robin Collins' new paper, "The Fine-Tuning for Discoverability" at http://home.messiah.edu/~rcollins/Fine-tuning/Greer-Heard%20Forum%20paper%20draft%20for%20posting.pdf , which I blogged about here: https://uncommondescent.com/intelligent-design/an-excellent-new-paper-by-robin-collins-on-fine-tuning/ ; (b) while I assume no built-in limit to the Designer's power, I deny that the Designer is capable of generating complexity from simplicity, any more than He is capable of making a square circle, for reasons which I have elaborated here: https://uncommondescent.com/intelligent-design/an-exchange-with-an-id-skeptic/ and https://uncommondescent.com/intelligent-design/what-kind-of-universe-cant-god-make-a-response-to-dr-james-f-mcgrath/ . At the very least, the universe must have been front-loaded in order to generate the specified complexity (as described by Paul Davies) that we find in living things. I would also refer readers to the paper, "Life's Conservation Law: Why Darwinian Evolution Cannot Create Biological Information" by Professor William A. Dembski and Dr. Robert J. Marks II, at http://www.evoinfo.org/papers/ConsInfo_NoN.pdf ; (c) I deny the possibility of any agent's designing an organism from the top down. In order for a thing - be it a person, an animal, a plant or a mineral - to be a genuine entity in its own right (and not just a virtual reality imitation of an entity), it has to be fully specified, at all levels, from the bottom to the top. This is because the top level of an entity does not, and cannot, determine all of the details at the bottom. I've written more about this here: http://www.angelfire.com/linux/vjtorley/feser2.html . 
What this means is that the Designer cannot achieve His ends merely by willing the results; He has to actually do some engineering, at the molecular level; (d) I assume that the Designer works as economically as possible, and with a minimum of effort. Dr. Rob Sheldon's paper, "The Front-Loading Fiction" at http://web.archive.org/web/20090715062610/http://procrustes.blogtownhall.com/2009/07/01/the_front-loading_fiction.thtml puts paid to the notion, still popular with some theistic evolutionists, that it would have been easier to design living things by writing a program that would generate them all. As Dr. Sheldon shows, there isn't a program that could do that in our cosmos, where space and time are quantized - and building a universe with continuous space and time would require infinitely more detail on the Designer's part. It's therefore more economical to assume that the Designer manipulates the cosmos when He needs to; (e) on the other hand, creating each species de novo would be like reinventing the wheel. I therefore assume that the Designer intervenes in the biological world by modifying the genes and proteins of existing organisms. Hence I accept common descent. Now to some more specific questions, in response to a challenge put up over at the Skeptical Zone by Richard Hughes (see http://theskepticalzone.com/wp/?p=4228 ): Who is / was the designer? As I've said, I believe the Designer to be God, although I can't prove that scientifically. What was their motivation(s)? To make a universe fit for living things, and especially conscious, intelligent beings, and to make it in a way that intelligent beings could discover His existence. What was their method of fabrication? Divine fiat, beginning with the laws and constants of Nature, and subsequently, at the dawn of life and at various times during the history of life. A few cosmic events may have also been achieved through direct intervention (e.g. formation of the solar system, or the Earth-moon system, or the collision of the comet that killed the dinosaurs with planet Earth). How many design interventions were there? At least 10 trillion, or 10^13. [Update: 10^12 is probably a better estimate, as the average lifetime of a species is 5 million years, and the current proliferation of species goes back a little over 500 million years - hence the proportion of species that have ever lived which are still alive today is probably closer to 1% than 0.1%, as I assumed in (2).] Justification: (1) Most of the species that have lived on Earth have arisen since the beginning of the Cambrian period, 542 million years ago. (2) The proportion of species that have ever lived which are still alive today is about 0.1%, or 1 in 1,000. (3) The number of species living today is about 10 million (some estimates go as high as 50 million). (4) Each species, according to Dr. Branko Kozulic's online paper, "Proteins and Genes, Singletons and Species," has about 1,000 singleton proteins (and a similar number of genes) which are chemically unrelated to any other proteins (or genes) and which we can safely assume were designed. 1,000 x 10,000,000 x 1,000 = 10 trillion. Front-loading all these proteins would have been infeasible, as Dr. Robert Sheldon shows in his 2009 article, "The Front-loading Fiction" at http://web.archive.org/web/20090715062610/http://procrustes.blogtownhall.com/2009/07/01/the_front-loading_fiction.thtml . Yes, I have done the math. 
I realize that 10^13 interventions over 500 million-odd years means 20,000 per year, or about 55 per day (most of them, I assume, in places like the Amazon, which abound in species). [Update: If the true number of interventions is 10^12 rather than 10^13, as I argued above, that would still mean an average of 6 interventions per day, or roughly 2,000 a year. That may sound like a lot, but it's just a corollary of the statement that 2 new species, each with 1,000 distinct singleton proteins and genes, arise somewhere around the world every year. My guess would be that these are engineered pretty much simultaneously for any given species, and there may even be some degree of synchronization between the engineering of proteins and genes in different species; but I could be very wrong here.] Incidentally, I don't necessarily think the tempo of evolution is uniform: probably speciation takes place in waves, so the rate may fluctuate. Also, I'm not sure how many individuals get new proteins implanted in them by the Creator when a new species (as defined by Kozulic) originates. I've assumed it's 1, but if it's 1,000, then you'd have to multiply my 10^13 figure by 10^3, which gives you 10^16. What specifically was designed? Proteins. RNA. DNA. Molecular machines. The first living cell. The eukaryotic cell. The different body types for complex animals. The different cell types in each plant, fungus and complex animal. The human body. All these systems were designed incrementally, for two reasons: (i) the design process had to occur in sync with the Creator's terra-forming of planet Earth over the last 4 billion years, to make it fit for life and especially complex life-forms like us; (ii) incremental design would have ensured maximal stability, minimizing the need for any further Divine intervention to prop up systems in order to prevent them toppling over. An incremental design process also means that new organs & organelles were designed by modifying pre-existing biological systems - which is why human embryos have tails, and why systems like the giraffe's laryngeal nerve look awkward from an engineering viewpoint (although they actually do quite a good job - see http://www.icr.org/article/recurrent-laryngeal-nerve-not-evidence/ ). What specifically wasn't designed? Junk DNA (yes, I'm happy to acknowledge there is some, though nowhere near as much as evolutionists assume). vjtorley
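As a quick check on the arithmetic in that estimate, the stated figures can be run through a few lines of Python. All the inputs below are simply the round numbers quoted in the comment, taken at face value rather than independently verified:

living_species = 10_000_000          # ~10 million species alive today (figure (3) above)
fraction_alive = 0.001               # ~0.1% of all species ever lived still alive (figure (2))
singletons_per_species = 1_000       # per the Kozulic paper cited above (figure (4))

total_interventions = (living_species / fraction_alive) * singletons_per_species
print(f"{total_interventions:.0e} interventions")          # 1e+13, i.e. 10 trillion

years_since_cambrian = 500_000_000   # the "500 million-odd years" in the comment
per_year = total_interventions / years_since_cambrian
print(f"~{per_year:,.0f} per year, ~{per_year / 365:.0f} per day")   # ~20,000/yr, ~55/day

# Using the revised 10^12 figure instead gives ~2,000 per year, or ~6 per day,
# matching the bracketed update in the comment.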
