Thursday, February 11, 2010




Beneficent Animus?

If there's anything that would seem to require a modicum or more of disrespect in the list of authoritative systemics, it's Game Theory. So here's my latest contribution to that free-floating need.

Response to a post at http://scienceblogs.com/cortex/2010/02/too_many_fastballs.php#comments,
self-explanatory as to its purpose:

"The irony here is that game theory, which treats life's strategies as gaming within a set of theoretically presumptive rules, isn't even predictably accurate when compared with our most entertaining varieties of actual games where rules are made to fit. And yet here you seem to be arguing that if strategies were changed, the teams that change them would win more often. Ridiculous. These theories are wrong precisely because they don't account for the innumerable counter strategies that each team can "choose" from within the rules of the particular game.
Then apply this to the game of life where we don't yet know some of the most basic rules, and thus some of the most basic strategies and counter strategies so far devised by biological forms. Game theory which presumes to be operational within those rules can never be more predictively accurate than the accuracy of those presumptions.
(Which I might add, don't even presume there's a significant switch at some point between our short term and long term predictive apparatus.)
Posted by: royniles | February 11, 2010 3:27 PM"

Hope that helps.

Maybe this will help as well, as far as counter strategies are concerned:

"The essence of any effective strategy may be in the options available to reveal or conceal its purposes or intentions. The default position of any strategic move is its openness to view - its revelatory nature. Our first strategic options of choice may have been in the learned (and later instinctive) concealment of that move, in whole or in part, and before, or during, or even after it's done. Without having such options available, or making use of such, you may perhaps have a choice of optional goals, but have access to an extremely limited set of options for obtaining them. Evolution may have taught all surviving strategists that openness in competition is virtually never as effective as strategies with an element of surprise or at least concealment. (Even a shark will conceal as much of its approach and intentions therewith as effectively as possible.)"
(From a work in progress.)

10-28-2013: Actually that last bit was quite silly, as I missed seeing the point that all gaming strategies are by their nature deceptive. No one ever tells the other side or sides what their strategies are unless they will be seen as so unbeatable that the game is destined to be won before it starts.
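On the counter-strategy point, even the simplest game makes it concrete. In rock-paper-scissors, any fixed or biased strategy invites a best response that beats it, while an unpredictable mix concedes nothing - which is one reason deterministic predictions about what a player "should" do keep failing. A toy sketch of my own, not drawn from the quoted exchange:

```python
# Rock-paper-scissors: a predictable (biased) strategy can be exploited
# by a counter-strategy; the uniform mix cannot. Purely illustrative.
MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    """+1 if move a beats move b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def best_response(mixed):
    """Best pure counter to a mixed strategy (probabilities over MOVES),
    together with its expected payoff."""
    def value(counter):
        return sum(p * payoff(counter, m) for p, m in zip(mixed, MOVES))
    best = max(MOVES, key=value)
    return best, value(best)

# A player who over-throws rock (50%) is exploited by always-paper...
print(best_response((0.5, 0.3, 0.2)))   # ('paper', 0.3)
# ...while no counter to the uniform mix earns more than 0 on average.
print(best_response((1/3, 1/3, 1/3)))
```

The biased player loses 0.3 points per round to the counter-strategy; the unpredictable player loses nothing, whatever the opponent tries.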

Saturday, February 06, 2010


An Illustration of Science's Neglect of Purpose as an Explanatory Factor with Regard to Almost Anything


Excerpted from: http://bigthink.com/ideas/18091


Concerning an interview with David Albert, a professor of philosophy at Columbia University, whose research is mostly concerned with issues of the foundations of physics. Titles heading the interview were "Where Philosophy Meets Science" and "The Profound Violence of Time."


It would of course help to understand the following commentary if the entire transcript of the interview with David Albert were to be read first, but there were just too many pages for me to copy here. In any case, I've added the relevant parts to which my comments were directed:


ROY NILES on February 5, 2010, 7:54 PM

Extremely interesting until we get to this part:
“Note that the set of events depicted by the movie being shown in reverse is just as much in accord with everything we believe about the laws governing collisions between billiard balls as is the movie being shown in the correct direction. That is, if you were shown a movie like this and asked to guess — just based on your familiarity with the laws of physics, just based on your familiarity with how billiard balls behave when they collide — if you were shown a film like this and asked to guess whether it was being shown forward or in reverse, you wouldn’t be able to tell. Physicists express this by saying that the laws governing collisions between billiard balls are symmetric under time reversal, okay? And what that means more concretely is — a law is said to be symmetric under time reversal if it’s the case that for any process which is in accord with that law, the same process going in reverse — that is, the same process as it would appear in a film going backwards — is also in accord with that law. So we say that the laws governing collisions between pairs of billiard balls are time-reversal symmetric. Good.”


Me: But this is not true in the sense that we actually would be able to tell. Because the balls originally lost some momentum – one before hitting the other, and then the second losing momentum until running out of room or energy. In a reverse of the filming, the balls would both visibly or measurably gain momentum.
So whatever law is governing collisions between pairs of billiard balls, it doesn’t seem to be the one of time-reversal symmetry.
I can’t help but think that somehow I have to be wrong about this, because I can’t imagine how, otherwise, everyone else in the know here would seem to have to be.


ROY NILES on February 6, 2010, 1:10 PM

But let me comment further relative to this part:
“Once again, it appears as if although the theory does an extremely good job of predicting the motions of elementary particles and so on and so forth, there’s got to be something wrong with it, okay, because we have — although we have very good, clear quantitative experience in the laboratory which bears out these fully time-reversal symmetric laws, at some point there’s got to be something wrong with them, because the world that we live in is manifestly not even close to being time-reversal symmetric.”


Me: Because perhaps the world we live in has to contend with purposive behaviors of those forms of energetic activity we’ve designated as living. It’s the reversal of purposive behaviors that can’t be seen or envisioned as a symmetrical process.
I’d go further in proposing that nature’s laws are purposive, whether life-giving is within that purpose or not.
If so, time-reversal symmetry breaks down accordingly.
And if purposive, we don’t know why, not knowing why there had or have to be laws to begin with (assuming they had a beginning and weren’t always here). But we should know in any case that we can’t reverse, with any form of symmetry, the purposes to which those laws are put.


ROY NILES on February 6, 2010, 3:00 PM

And try reversing the film of these billiard shots and observe the time reversal symmetry.
http://www.dailymotion.com/video/xep51_turkish-semih-sayginertrick-shot-sh



Addendum: More will be said about all this later, but for now, consider this: Laws have a use, so one could argue that usefulness is essential to their purpose. To reverse "time" would be to destroy that useful purpose - that sequential order in nature that these laws seem universally devised to regulate.
And so if what exists at any point in time results from a conversion of diverse forces, purposive or no, any symmetry expected to be found with a reversal of that conversion just wouldn't be there. The web of causation that would need to reverse itself so obediently is in the end as vast as the universe.
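The billiard-ball disagreement above can be put in a few lines of arithmetic. In the idealized, frictionless collision the transcript describes, the time-reversed event obeys the very same law; it is dissipation (felt friction, sound, the balls visibly slowing) that lets a viewer tell the film's direction - essentially the observation in the first comment. A minimal sketch of my own, not from the interview:

```python
def collide(v1, v2, m1=1.0, m2=1.0):
    """Velocities after a 1D elastic collision (momentum and
    kinetic energy both conserved)."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

# Forward film: ball 1 at 2 m/s strikes a resting ball 2.
u1, u2 = collide(2.0, 0.0)          # equal masses swap velocities

# Reversed film: negate the outgoing velocities and let them collide.
# The same law hands back the negated incoming velocities, so the
# collision law alone cannot reveal which way the film runs.
r1, r2 = collide(-u1, -u2)
assert (r1, r2) == (-2.0, -0.0)

# Add felt friction (speeds shrink each frame) and the symmetry is
# gone: run that film backward and the balls visibly speed up.
speeds = [2.0 * 0.98**frame for frame in range(5)]
assert speeds == sorted(speeds, reverse=True)   # forward: slowing down
```

So both readings can be right at once: the bare collision law is time-reversal symmetric, while the actual filmed shot, with its steady loss of momentum to the table, is not.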

Thursday, January 28, 2010


On the Dubiousness of Purpose

I got myself involved in an online discussion of morality the other day, a subject I'd sooner have avoided, except for the prospect of being as fatuously remarkable as the next guy on that subject. And of dealing directly, as it turned out, with a self-described public intellectual. And so was I dubiously honored, with some of the commentary/evidence to be posted here as a reminder to avoid such temptation in the future. (Names will be altered to preserve my innocence.)

And so awkwardly I begin:
"M**, it occurs to me that what you've done here is fail to present your definition of morality in terms of its evolutionary purpose, instead defining it in terms of a long term goal with which our metaphorical evolution has always had a problem in abstracting from its perceptions of the immediate natures of its needs.
Rules of behavior are of course not in themselves the goals they're meant or best expected to attain. Instead such rules in the metaphorical eyes of our evolutionary apparatus were meant to serve a more immediate purpose - which some natural selective force over time could well have fashioned to fit the goal of human welfare.
Except it seems that time enough has not yet passed to witness that achievement.
Leaving us with the questions as to which strategies and tactics we're prone to use for short term goals might fit such long term purpose as well. Getting us back to a consideration of whether a focus on the nature of the purposes that fuel our expectations might make the answers easier to find.
And no, I'm not referring to the possibility of divine purpose but to purposive expectations endemic to the mechanisms of all living and choice making entities."

But then the big man replied: "Evolution, of course, doesn't have a purpose. But more to the point, to me evolution enters into the picture only early on, in endowing us (and probably other primates) with an innate sense of right/wrong and justice (as seen in the behavior of bonobos, for instance). After that, it's our ability to reflect on things that really gets ethics off the ground."

So then I say:
"M**: - I refer to evolution as purposive, and in particular with respect to the "purposive expectations endemic to the mechanisms of all living and choice making entities.'"
Then you say flat out that "evolution doesn't have a purpose" - but add that nevertheless and early on, it endows us (and probably other primates) with an innate sense of right/wrong and justice.
Which would seem to require some facility on their part for choice based on purposive expectations and the like. (Or would it not?)
Evolution then viewed from your perspective as purposive, serving what we have come to call a purpose, but unable in any way, even in the guise of life itself, to see that purpose coming.
Talk about a category mistake, you seem to have come up with a whopper."

So then this reply from M:
"I have no idea what you are talking about (italics mine). Evolution does not have a purpose because purposes are things that are characteristic of conscious beings - so that's out unless you subscribe to intelligent design. Evolution "endowed" us with a moral instinct simply because natural selection apparently favored such instinct in a limited form in certain species of social primates. So?"

So I say:
"So natural selection favors certain outcomes but in retrospect did so to no purpose. Ridiculous.
No I don't subscribe to intelligent design by some entity separate from life itself, nor do I subscribe to the Neo-Darwinist position which is even more magical.
I had thought it would be clear to you that with its sometimes slow and plodding trial and error ways, life has managed to engineer its own designs. But clearly I've had you wrong."

And then I add: "To recapitulate what I've proposed here as to an alignment between evolution and purpose, I had referred to evolution as purposive, in particular with respect to the 'purposive expectations endemic to the mechanisms of all living and choice making entities.' Later stating that 'with its sometimes slow and plodding trial and error ways, life has managed to engineer its own designs.' And as an aside, nothing in that view requires the assumption of either teleology or teleonomy. The predictions made by organisms are always to some extent inaccurate, but not unwitting (as teleonomy would require). They are intentional and therefore purposeful. They work in the end because they are consequential."

M** then replies to me as follows:
"As for purpose in evolution, as I said, unless one believes in ID it's nonsense. Come to think of it, even if one *does* believe in ID it's nonsense.
I have absolutely no idea what the phrase "purposive expectations endemic to the mechanisms of all living and choice making entities" could possibly mean."

And I conclude this exchange with:
M**, if you were really interested in knowing what purposive expectations and the like might mean, you could simply google the phrases.

Adding to all in general: "-- of course moral behavior is relative to the particular circumstances. It's based on what we have learned to sense that others in that particular culture would expect us to do. This same expectational mechanism exists in all biological cultures. (And yes, M**, I've been advised that even bacteria have their own little separate cultures.) I also have some idea as to how these mechanisms evolved, and with what commonality of purpose. But this is clearly not the time or place to expand upon such a thesis."

So there it is folks. Out in the open. I'd long wondered why purpose is seldom if ever used when explaining the evolution of, for the best example, behavioral traits - as if the behaviors themselves, done for whatever temporal purposes, were purposively irrelevant.
And voila, as for purpose in evolution (at least in the publicly intellectual view of things), it's nonsense. Did I mention this guy was an evolutionary scientist?


Friday, January 01, 2010


So if, as the man said, you can't get there from here, how did we all get here from there?

Or, subtitled, Free Will Doesn't Come Cheap


But hopefully we've come to the point where we understand that whether or not the universe is unfailingly deterministic in nature, we and all other organisms are programmed to assume that actions are necessary to fulfill whatever has or has not yet been determined as to their consequences.


Yet there remain those who would argue that regardless of nature's initial programming, biological entities may themselves be programmed to be less than free in their responses - most if not all actions being chosen for them by the confluence of their genetically determined responsiveness and the circumstances that require them. Which would seem to come down to questions of the relative degree of freedom that organisms are under the illusion they are able to operate from. Seems a bit silly when looked at that way, but there it seems to be.


So I choose to forge ahead with some of the inferences I've drawn from this mishmash of willfulness or will-lessness in any case. And to re-use some of the commentary I had occasion to make elsewhere on the cybernet in response to some of this silliness.


Part of the confusion here is reflected in one respectable writer's comment that people may associate free will with randomness because they think (or want) free actions to be unpredictable. But he added that unpredictability does not require indeterminism - deterministic events can be “random” in the sense that they are not explicable in terms of predictable patterns of events.


But in my view this is just more philosobabble, because it's not about people wanting their actions to be unpredictable, it's about people worrying that their actions aren't predictable by virtue of their own choice! We want to know we have the freedom to predict with some accuracy. We want in other words the right to determine to at least some degree the nature of our own futures, if only for our own protection. And the other side of this coin is that we want the freedom to make such actions unpredictable by others if we so choose to exercise that form of will. And not only do we want nature to have given us that right, we want the freedom to use it in comparative abundance. And it may just be this "wantedness" that has effectively driven the evolution of our intelligence, or all forms of intelligence for that matter.


I responded publicly to other writings, such as some regarding "self-theories in the construction of free-will." These proposed that differences in our dealings with responsibility for our acts were essentially between the "fixed" traits of entity theorists (who tend to respond to difficulty by relinquishing agency) and "malleable" or dynamic traits of incremental theorists (who tend to react by reasserting their agency).


About which I wrote that some of the so-called fixed traits may only be "fixed" with regard to the upper levels of abstractive reasoning available. Which will then affect the self-confidence required to assert power over events that those with lesser abilities may not develop.

All of this based as well on heritable personality behaviors that (fixed or otherwise) reflect strategic approaches to anticipated events on a dominance hierarchy scale - and that also mesh with a continuum scale of competition versus cooperative strategic preferences (or partially heritable skill sets).


Well it was a response on my part that seemed like a good idea at the time.


I went on to say that rather than self-theorists, perhaps they could simply be referred to as self-strategists. Because behavioral traits can be at once both fixed and malleable - the differences in degree involving a variety of determinant circumstances.

And these same writers also talk about the traits stemming from theoretical beliefs, rather than vice versa. The beliefs in turn often self-chosen, relative to free will and deterministic conceptions.

But I'd argue the reverse: that the "theories" in question are more likely determined and/or generated by the initial extent of flexibility in the individual array of heritable traits.

And if (as they might still argue) there's cognitive construction of belief involved independent from this element of causation, I'd argue that a prominent factor in this would be the effect of the cultural environment in which one's otherwise internal strategies were operative.


In other words a deterministically religious society develops a range of predictive behaviors consistently different from those found in a society that stresses freedom of thought and action, yet the individual dominance hierarchy strategies can still prevail.


Further, while it has seemed that our instinctive analytical functions are predicated on nature being essentially deterministic in its purposes, I'd argue that our predictive apparatus works from the premise that nature's purposes are more determinate and decisive. In other words the cognitive concept of predetermination (that has to have been a more recent construct) was not likely a mechanical element of life's initial predictive mechanisms.


Accordingly on one forum I wrote as follows:

"We unconsciously assume that nature has a plan, as well as that nature's actions are not only intentional but can be at times directed toward us in particular - and that these intentions in general are evidence of the particular plan that nature has made or has had made for it. So it's not that we feel events are predetermined so much as we feel events are part of nature's planning.

The difference here is this - that we feel ourselves as part of this process, in which we are expected to react to nature's probes in ways that will then allow or even require nature to revise its plans for us (either collectively or as individuals) accordingly."


So that with respect to our approaches to the "free will" problem, the differences seem to lie in the various ways we have learned to interpret nature's apparent (yet ultimately illusory) intentions. And thus to someone who felt that "lesser" animals didn't concern themselves with determinative forces in choosing their actions, I responded as follows:

"Yes, but to the extent that any animal can be said to have a rational component to its cognitive system, its emotional brain will turn to that component for "predictive" assistance - and to the extent that "rational" analysis turns to the functional culture from which it derives its 'learning,' the determinative nature of that culture will have an effect on the emotions. And these emotions, as the final arbiter of choice and action, will in some sense have taken the measure of its "freedom" into account.

And there are different varieties of "determinative" algorithms operating in any biological system that direct interaction among individual entities as a "cultural" force. Quorum sensing for example is to some extent "taught" by the particular bacterial culture extant to its group survival."


Asked for clarification of the "yes, but" proviso, I posted:

"Basically I'm arguing that the freedom each individual or group of individuals has to make decisions, consciously or just through some primordial sense of awareness, is limited by the scope of optional choices available to it through both its heritable strategies and those learned through the immediacy of its experiences.

So when I said "but," that was in reference to what I regard as fact that all life forms have considerations that equate to "freedom of will" to take action built into their functional apparatus, whether they can in any sense conceive of the nature of choice and freedom or not. And this is not limited to what we regard as brained animals. Some or perhaps most single celled organisms have no discernible "brain" area, yet calculate, learn and choose - such learning as a group being necessary for acquiring resistance to antibiotics as one example. (And I also subscribe to the theory that all life forms react as if nature's forces were potentially aimed at them, the purposes for which the organism had fashioned its programming to be wary of.)"


I might add that all of these organisms, including ourselves, tend to operate as if they are a part of whatever plan nature is operating from at that moment. However, it's likely only the humans that contemplate whether we can willfully change the plan counter to whatever "will" we conceive to be the extension of some natural purpose.


And that's how on this day I got from there to here, burning all bridges in my wake.


Thursday, December 17, 2009

I Just Discovered This Guy

and perhaps all's a bit more well in the world after all.

Monday, November 23, 2009


The Awareness Apparatus

OK, it's about time I posted something representing a bit of my own "philosophie," and so to start it off, here's a copy of a comment first posted at Jonah Lehrer's blog, The Frontal Cortex, http://scienceblogs.com/cortex/2009/11/reverse-engineering.php

It might make more sense to read the blog post first, which was about Reverse-Engineering, where computer scientists were attempting to artificially recreate the brain, but here goes my take on their chances anyway:

"The brain is a piece of meat that is, metaphorically speaking, operationally aware of itself - a self-actuating choice making mechanism that can use such awareness to apply and direct energy by its own choice. It can determine its own options or add to its available set through its own feedback assessment system.
Our human brain, in short, has awareness of its own operational purposes. We simply don't have that yet in our computers.
All a computer would arguably be "aware" of are the programs in its memory, and of the results of whatever it is required by those programs to add to that memory. There is no awareness of the meaning of the symbols outside of that memory, no ability to seek out and retrieve sensations that it could use in its calculative processes before converting the input to symbols that mean something to the computer itself, and that would allow it to make choices through an assessment process outside of the restrictions of its programming.
Not that one could never be constructed that could - but only because one can never say never with any degree of certainty.
Posted by: royniles | November 23, 2009 4:07 PM"

So what is this "awareness" anyway, especially if not quite the same as consciousness (a subject for another time and place)? Because it would seem that for life to form at all, such forms needed to "see" their own existence as problematic and thus to have a need for an awareness of some aspect of their metaphorical "self" as a functional necessity. But with the Catch-22 being that some element of molecular awareness seems needed for awareness to exist at all.
So is awareness in some sense a property of all energy systems? And as a consequence self-activating choice making mechanisms - i.e., "life"- can use such awareness constructively? Because the very need for choice implies there's an action function available to apply and direct that energy accordingly.

Awareness would then seem (at least to me) to be a functional aspect of memory. There arguably would be no awareness as we understand it (or think we do) in a system with no necessity for an accessible memory for purposes of computation. And the period of retention of sensory input needed for such operational necessity would seem equal to the "awareness" needed to make the calculated choice. We think of feeling "awareness" as somehow both instant and constant, but we may feel it only after a delay between input to memory and the calculated choice that triggers an action. (Or not - this glimpse of the muse has been far from clear.)

But so far it would appear that awareness begins with the retention of any of the forms of sensory input that a calculative process must use to make a choice between or among its own set of options - with some form of "memory" retention perhaps applying to any information driven process that can be made to detect a signal to get the ball rolling.
Returning me to what I'd posted at the start about the brain as a self-actuating choice making mechanism that can use such awareness to apply and direct energy by its own choice - and can determine its own options or add to its available set of such through its own feedback assessment system.
Letting us see again that our human brain is a machine aware of its own operational purposes. And in my view representative of all living and calculating entities in that respect.
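The loop I keep describing - choose from memory, assess the feedback, and expand the option set when the known options disappoint - can be cartooned in a few lines. This is purely my own illustration, with hypothetical names throughout, not anyone's model of a brain:

```python
def feedback_agent(options, evaluate, rounds=10, invent=None):
    """A cartoon of a self-actuating chooser: it picks the option its
    memory currently scores best, updates that memory from feedback,
    and may add a new option to its own set when every known one
    disappoints -- a 'feedback assessment system' in miniature."""
    memory = {opt: 0.0 for opt in options}
    for _ in range(rounds):
        choice = max(memory, key=memory.get)      # act on remembered scores
        memory[choice] += evaluate(choice)        # feedback updates memory
        if invent and all(score <= 0 for score in memory.values()):
            memory[invent] = 0.1                  # expand its own option set
    return max(memory, key=memory.get)

# Toy environment: the old habits score poorly, so the agent ends up
# discovering and preferring the invented option.
scores = {"habit_a": -1.0, "habit_b": -0.5, "novel": 2.0}
print(feedback_agent(["habit_a", "habit_b"], scores.get, invent="novel"))
```

The point of the toy is only that choice, memory, and feedback are one mechanism: take away the retained scores and there is nothing left for the "chooser" to be aware of.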

Now as to that action function I mentioned ----------

Saturday, November 07, 2009

This Performance Speaks For Itself.

Or speaks for me in any case.



From the 1973 Live Album, Land of Make Believe,
Chuck Mangione Quartet with the Hamilton Philharmonic Orchestra
& Esther Satterfield, vocalist.

'Nuff said.

Friday, October 30, 2009


Is there a Mutual Exclusivity Paradox in this picture?

As noted in Wikipedia, in evolutionary biology, group selection refers to the idea that alleles can become fixed or spread in a population because of the benefits they bestow on groups, regardless of the alleles' effect on the fitness of individuals within that group. Reference was also made to recent group selection models being seen not so much as a fundamental mechanism but as a phenomenon emergent from standard selection.
As one can see from earlier posts here, I've had my own ideas and questions about group or multilevel selection as fully representative of evolutionary mechanisms in the sense that Darwin and others to follow have used that phrase.

Notably I have had online disagreements with Dr. D S Wilson, a noted proponent of group selection as characterizing such a mechanism - and especially with his contention that the mechanism selects for behaviors that he and like colleagues have seen as a set of separate genetic traits, while I see these "traits" more simply as heritable algorithms (think "instincts") that carry a range of strategic behavioral options.

Except that more recently Dr. Wilson has re-posted some of the items in his Huff Post series to ScienceBlogs (Evolution for Everyone), and now concedes that we may be dealing with conditional strategies instead of fixed genetic traits, but claims that his computer models will still confirm that the evolutionary process the groups contribute to remains the same as far as selection for different behavioral traits is concerned.

Which has attracted some new commentary, and one relatively anonymous poster in particular may have put the final kibosh on this whole rather silly enterprise. While we aren't told who this person is, the stuff is nevertheless in the public domain, so I'm going to quote some of the relevant comments below. (The Wilson stuff he's commenting on is the same old, same old, of course.)

So I quote:
"Dr. Wilson,
But you haven't commented on whether an organism using altruism as a conditional strategy also has the option of using selfishness as a conditional strategy, and if so, at what point does an organism use one of these strategies so exclusively that it becomes, in effect, unconditional and measurable or identifiable as that individual's dominant trait - so that for purposes of your theories, the individuals with that trait achieve dominance in a group at one level, and their progeny, who purportedly inherit that trait, will as individual trait carriers, lose that dominance at another level where the particular strategy has (according to the models) lost its effectiveness.
In other words, why is it posited that the groups are seemingly selecting for individual organisms that carry the particular trait, rather than selecting for the traits that will then come to the fore in the new alignment of individuals? So that in the latter case, we would not have so much of a change in the phenotype as more simply one of the phenotype adapting its behavior to the group culture?

Put another way, there's clearly a mutual exclusivity paradox involved when you tie the effectiveness of the trait to the individual that carries it, rather than tie the effectiveness of the individual to the choice of that trait in a particular group dynamic.
Posted October 29, 2009 1:54 PM"

(There had been no response to the above, and apparently or partly for that reason, the same person responded to another installment in the series as follows:)

"'group selection is a form of natural selection that results in adaptations at the group level, when and if it occurs.'
Could you have made that any more ambiguous? Natural selection has to mean more than an individual either being selected into the group, or selecting itself to be part of that group. Isn't it rather meant to refer to some evolutionary change of some biological structure, a change that would be subject to further change, but not to a reversal of that change (such as un-joining the group, for example). Now I suppose the phrase would refer to a super-organism as well, but does this include the type where the change is reformative but not irreversible - that wouldn't really be an evolutionary move, now would it?
And since we're at bottom here concerned with human evolution, you aren't applying the natural selection label to individual adaptations within the group that are temporary or subject to their reversal, are you? Because again, with humans at least, that will not have become an evolutionary process, but a behavior modification process where situations cause anomalous developments in individuals. Although I'll grant that if these developments are then heritable, they would be examples of the group as a mechanism for directed rather than random selection.
Otherwise, and again with humans in particular, if an example of evolution by natural selection, group selection would arguably involve an evolutionary process that causes phenotypical change at an exponentially faster rate than the ones we have actually been able to account for in the discoverable history of our phenotypic evolution!
So again, could you really be talking about either a short or long term behavior modification process, and if so wouldn't that be as much an evolution (using the term loosely) of culture as of the biological process you seem to be an advocate for? The more likely biological aspect of the process then being, in essence, another example of adaptation to environmental changes - interaction with groups virtually always being an element of any changes in environment?
Posted October 29, 2009 8:44 PM"

So while personally I might have said things differently, I might not have said them better, or at all. And as far as I'm concerned, any configuration that purports to effectively function as a group "selection" process is not likely to offer more than a variation of evolutionary adaptation, whether to a new environment, a new cultural milieu, or as in most cases, to some combination thereof.

Or, and in my view even more likely, it will be a facilitator of what some refer to as the Baldwin effect, which occurs, in theory, when a biological trait becomes innate as a result of first being learned. And I'm of the opinion that much of the motivation behind these experiments is to find an alternate way of explaining how such traits might have so successfully evolved without that learning.


And already some are referring to the "mutual exclusivity paradox" as if it's familiar terminology and somewhat old news. And for all I know it is.

Thursday, October 08, 2009


Before You Do The Math, Do The Strategy

Let's now get down to the more serious side of life and its confusion of strategies and look at a famous example known as The Monty Hall problem, a mathematical probability puzzle based on the American television game show Let's Make a Deal:

In the Monty Hall conundrum scenario, where you, as a game show contestant, have picked one out of three doors with no prior knowledge of which one conceals the prize, your odds of being right are one in three. Then if the host, Monty, who KNOWS (even if you don't know he knows) what's behind the doors, opens one of the two you didn't pick, and it turns out to be a wrong door as well, should you, when now given that option by Monty, switch your pick to the third door?

Most say that the odds are no longer 1 in 3 but are now 50-50, meaning there's no "probability" difference between the remaining picks. But no less a person than the brainy Marilyn vos Savant has advised her readers that the player should switch, after which math mavens said she was nuts.

But to me the key element here is that Monty, when he opened and eliminated one door, obviously knew in advance which door to eliminate - so luck had nothing to do with his move, and seemingly changed nothing as far as the overall odds were concerned. But Monty's move did in fact change the odds for you, the player, who in effect have been given two choices rather than one, and this was no doubt the intent all along.

Except your new circumstances also tell us that while you had a one in three chance initially, it now appears that you have a one in two chance to get it right. Causing many, if not you in particular, to feel that moving again or not moving again will on average have the same result. So if you decide to do nothing, you have in effect made a second choice: to stand on the chance you already took to begin with. Not pushing your luck, so to speak.

But what you won't likely consider is that in fact there's still a 2/3 chance you CHOSE WRONG initially. So one more choice (which has now become simply a switch) in this situation will ON AVERAGE pay off more often than not. It would now seem you are actually exchanging 1 in 3 initial odds for 2 in 3 reconstituted odds of CHOOSING RIGHT if you switch. Because your chances of twice choosing wrong have been reduced to 1 in 3!
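For anyone who'd rather not take that arithmetic on faith, a short simulation settles it. This is a minimal sketch (Python is my choice here; the function names are mine, and the standard three-door game is assumed):

```python
import random

def play(switch: bool) -> bool:
    """Play one round of the standard three-door game; True if the player wins."""
    prize = random.randrange(3)
    pick = random.randrange(3)
    # Monty, knowing where the prize is, opens a door that is
    # neither the player's pick nor the prize.
    opened = random.choice([d for d in range(3) if d not in (pick, prize)])
    if switch:
        # Move to the one remaining unopened, unpicked door.
        pick = next(d for d in range(3) if d not in (pick, opened))
    return pick == prize

def win_rate(switch: bool, trials: int = 100_000) -> float:
    return sum(play(switch) for _ in range(trials)) / trials

print(f"stay:   {win_rate(False):.3f}")   # hovers near 1/3
print(f"switch: {win_rate(True):.3f}")    # hovers near 2/3
```

Run it a few times: staying wins about a third of the time, switching about two thirds - exactly the exchange of odds described above.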

And even more to the point, if given the "second move" option WITHOUT THE REVEAL you would NOT have had the impetus to take that second step. You would not have had that clue as to which way to move when taking it. And now you have both the clue to the move and favorable odds for making it!

Which, as I hinted earlier, are opportunities you likely don't know you have. The kicker being that, paradoxically, that same clue could have given you the idea that BECAUSE of the "reveal" you would not improve your odds at all by moving. Which would appear to have been Monty's tactical intention as well. So if you were thereby manipulated into thinking this latter view was the correct inference to be drawn, you would have been had.

And as far as I've read up on this subject, the most perspicacious have observed that it's because of Monty knowing what was behind each door that the conundrum not only existed but was eminently solvable. Which I'm now of the opinion is wrong as well. Because the key to its ultimate solution was not simply that Monty KNEW this, but that he REVEALED that knowledge to the player.
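One way to weigh KNEW against REVEALED is the commonly discussed "ignorant Monty" variant (my sketch, not part of the original show): Monty opens one of the two unpicked doors at random, and we keep only the rounds where he happens to reveal a goat. The reveal still occurs, but the knowledge behind it is gone:

```python
import random

def ignorant_monty_switch_rate(trials: int = 200_000) -> float:
    """Switch win rate when Monty opens an unpicked door blindly,
    counting only rounds where he luckily reveals a goat."""
    wins = plays = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        opened = random.choice([d for d in range(3) if d != pick])
        if opened == prize:
            continue  # Monty exposed the prize; the round is thrown out
        plays += 1
        switched_to = next(d for d in range(3) if d not in (pick, opened))
        wins += (switched_to == prize)
    return wins / plays

print(f"{ignorant_monty_switch_rate():.3f}")  # hovers near 0.500
```

In this variant switching gains nothing - the rate sits at one half - which suggests the reveal only carries its 2-in-3 payload when it is a reveal OF knowledge, that is, when Monty's choice of door was informed. The two ingredients turn out to be inseparable.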

So while all of this is ordinarily viewed as a math problem, it's actually, in my view, a strategic conundrum with its mathematical aspects concealed (as strategies are wont to do). It is the strategic plan here that makes use of math while at the same time being structured to deceive both players and observers in ways that can't be constructed or adequately explained by that very mathematical process.

Do I really think any of the above makes sense? Well, since most everyone now seems to agree with Marilyn vos Savant that one actually should switch in this situation, this effort of reasoning "backwards" from there to sort things out strategically says (to me at least) all this is surely part of the how and why such a puzzle was formed. Because my premise was and still is that all conundrums have at bottom a strategically deceptive purpose.

(And from what I've since read, those Let's Make a Deal players who actually switched benefitted from the switch on a 2 to 1 basis, except there seems to have been no attempt to determine the odds involving how many would or would not take advantage of that opportunity.)

Wednesday, October 07, 2009

The Parrot Who Would Rather Be Loved Than Feared.




Evidence perhaps that animals will explore first with suspicions held in reserve while at the same time asking others to confirm their lack of aggressiveness or guile? Strategies played against each other's, with context and circumstance giving further direction to the game.

Or is it simply that animals choose visible prospects of pleasure over less visible possibilities of pain?

Thursday, October 01, 2009

Toward an Explanation of Skepticism and Critical Thinking

My required self-administration of some grains of salt.




Omne Ignotum Pro Magnifico:
Everything unknown is taken for magnificent.


For an encore:

Toward Open Mindedness

Sunday, August 02, 2009



First Cause or First Strategy?

A question prompted by musings on strategy, such as this latest thought:

It seems necessary that in defining what we mean by "life" we recognize that all life forms need to have some learning capacity to predict and expect future events.
The initial forms had to learn to seek out energy. It's likely that the first such forms learned by happy accident that the first acceptance of energy was something that bore repeating. And when its source inevitably waned, they had to have "learned" that effort was now needed to search out more, or to receive it from a different source.

So shouldn't we now use a definition more as follows:
Life: A self-sustaining chemical reaction (or energy system) with strategic parameters and learned expectations. (Predictive becoming redundant in that context.)

Prompting what some will see as the greatest of all heretic suppositions:
Is it possible that the universe itself is no less than a collection of infinitely short term strategic operations all acting with the most immediate of purposes that form an infinite tapestry of unintended and unpredictable longer to longest term consequences?

I made some notes as a guide to what I hoped to add later, and then decided why not just store them here in the interim and see what that action does for my creative process.

Meaning of the universe, or in the universe, or to the universe ??
The universe seems to be a vast questioning organism, strategies developed in a trial and error series, that resulted in incremental advances in the nature of new questions structuring the strategic functions as they multiply, with what we have had to label "energy" as both their material and their "fuel" (hence the hypothesized entropy of the available supply? - which may in reality be inexhaustible?).

"Cause and effect" could just as well be the metaphorical first question, first problem, first option, first strategic choice, first strategic decision leading to next question, in an exponentially expanding omni-dimensional pattern of a trial and error progression throughout all of time's metaphorical existence.

Whatever potential the universe has had for the application of natural law, that law would have developed in concert with the strategies that found them controlling - laws that concurrently gave these strategies their operational parameters.

Rather than string theory, etc., think strategic geometry. Think a constrained randomness.

Nature of universe not predetermined, not deterministic, but yet deterministically random with an unpredictable progression of incremental inevitabilities.

Or not.


Saturday, July 11, 2009


Moron Strategies?
We seem to have arbitrarily labeled certain opposing behavioral strategies as separate traits under the assumption that the behaviors are in themselves satisfying rather than (or perhaps because of) being means to a satisfactory end that the supposedly divergent traits can have in common. We thus fail to see that the so-called traits may in fact represent different levels of strategic abstraction devoted to the attainment of the same overall goal. Case in point: the designation of selfishness and altruism as traits with separate genetic origins - as if there were a gene for one that is somehow separate from the gene for the other.
Except that it seems more likely that any perceived differences between the way individuals seem to express these "traits" are due to differences in the way they perceive the problems to be solved rather than in more discrete differences in their genetic tool kits.
The differences may just as well be in the heritable range of their strategic options, just as some have differences in the range or levels of abstract conceptualization in general. Because the ultimate goal to be attained by "altruistic" behavior can often be the same as one that a different entity has learned to seek out through a more direct or "selfish" process.
The abstractions used may also reflect different assessments of short versus long term necessities and/or perceivable and thus expected consequences from use of the respective strategies. Plus each strategy will also have some elements of the direct and indirect approaches, as that's why we call the process "strategy" to begin with.

Tuesday, June 30, 2009


It's My Copy, Right?

Musings to put on the record as mine, and whether for my good or my bad, from no-one else's muse that I know of:

Life: A self-sustaining chemical reaction (or energy system) with predictive expectations.
These are purposive expectations, the interaction of which is causative in forming a strategy or strategies, as well as in adjusting all of these as expectations are changed through trial and error experience. Their goal(s) would initially be to acquire and restore energy - the impetus to attain that goal being purposive.
The strategies that develop from the mix of purpose, expectations and environment can account as well for abilities of organisms to have divergent traits and be amenable to natural selection.

Excerpted from comments about The Origin and Evolution of Life on NASA's Planetary Biology Program website, http://cmex.ihmc.us/VikingCD/Puzzle/Evolife.htm:
"Interactions between various substances and energy yielded the autocatalytic systems capable of passing information from one generation to the next, and the thread of life began."

So is a life form also to be defined as an entity that needs or has a strategy for the survival of its species? Is not this "information" a component of strategy - does not a strategy require information as a determinant for action as well as information being the determinant of strategy?

And then can we not, for fairly obvious reasons that I intend to expand on at a later date, add to our definition as follows:
Life forms - self-sustaining strategic entities. Their forms and constituent elements are all a part of that strategy.
The implication being that evolution involves a selection for successful strategies as much as for anything else.

So if added to the initial definition here, we get something like this:
Life: A self-sustaining chemical reaction (or energy system) with strategic parameters and predictive expectations.

Putting it (in my view) succinctly:
Life forms are strategic constructs. The forms shape their strategies and strategies in turn shape their forms. The accidental juxtapositions with time and place shape their differences.

And tell me that red crest isn't a strategic component of that little moocher on the upper right.

(Proviso: This was not intended as a semantic retooling of the autopoiesis or "auto (self)-creation" concept - which, although supportive, is not in my view similarly concerned with the reciprocity of strategic processes, or with the underlying strategic intelligence dynamic that I will be proposing.)

And then there's Carl Zimmer's book on parasites, Parasite Rex: Inside the Bizarre World of Nature's Most Dangerous Creatures, in which, according to the reviews, he describes just how adaptive parasites are. Once considered similar to other predators, parasites have, it's now evident, developed the talent of making "prey" come to them. Unlike animals we are familiar with, most parasites take on greatly varied body forms as they go through the phases of their lives. These changes were first thought to represent different species, until it was recently demonstrated that "these creatures changed shape and function dramatically as they changed living environments."

In short, it seems we have in these parasite behaviors some of the best examples available of strategic forms matching up with other such strategic structures, all manipulating each other and themselves in the process, some gaining an advantage (or so it would appear) but nevertheless with all strategies and their material entities evolving - and clear evidence that intelligence is the driving force of evolution, as strategies are nothing if not intelligent.

And here's a teaser, as a portent of things to come:
http://www.micab.umn.edu/courses/8002/Rosenberg.pdf

Thursday, June 18, 2009


Bacteria with Culture?

I'm presently reading the marvelous book, Hierarchy in the Forest, by Christopher Boehm, where on page 224, he refers to humans, and to variations in their phenotypes, as "dealing with a species that, through morality, can radically manipulate its own behavior." (Having inferred elsewhere that morality represents a system of cultural values shared more by humans than by their ancestral primates.)

But isn't that another way to describe how all us animals use intelligence to adapt our strategic "phenotypes" to new situations? And how organisms purposively (using their versions of "intelligence") drive their own evolutionary adaptations - or how they affect the odds of the selective process, whatever it will turn out to be?

Because we humans didn't invent morality. Morality in effect invented us. Life's basic strategy is one of "moral" choice making. Simply put, whatever reliably increases the odds for survival is seen or felt on some level as right, prudent, and therefore moral. Whatever we have learned to trust as viable strategies, whether cultural or instinctive, has become the foundation of our human morality. (Often ironically hardening the concept to represent the essentially immoral, when changing circumstances should have otherwise required changing strategies - or reversing the labels when referring to the identical strategies of a predator or successful competitor.)

And coincidentally, I've just found this related article, Scientists Show Bacteria Can 'Learn' And Plan Ahead, at: http://www.sciencedaily.com/releases/2009/06/090617131400.htm

Excerpts: "Their findings show that these microorganisms' genetic networks are hard-wired to 'foresee' what comes next in the sequence of events and begin responding to the new state of affairs before its onset."

"--- implying that bacteria have naturally 'learned' to get ready for a serving of maltose after a lactose appetizer."

"Further analysis showed that this anticipation and early response is an evolutionary adaptation that increases the organism's chances of survival."

"After several months, the bacteria had evolved to stop activating their maltose genes at the taste of lactose, only turning them on when maltose was actually available."

So, in my view, using the term "evolved" means they passed on to successor bacteria elements of a strategy that they had learned from experience. They remembered results of trials and errors and "calculated" diminished probabilities, formed expectations and reacted or declined to react accordingly, and enabled the replication of those revised strategies.
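That reading of "evolved" can be given a toy form. In the sketch below (my construction; the benefit, cost, and mutation numbers are illustrative assumptions, not figures from the study), each cell carries a heritable propensity to pre-activate its maltose genes on the lactose cue, and reproduction is weighted by the resulting payoff:

```python
import random

def evolve(p_maltose_follows: float, generations: int = 300,
           pop: int = 500, benefit: float = 0.5, cost: float = 0.2) -> float:
    """Return the evolved mean propensity to anticipate maltose after lactose."""
    traits = [random.random() for _ in range(pop)]  # propensity to pre-activate
    for _ in range(generations):
        fitnesses = []
        for t in traits:
            anticipates = random.random() < t
            maltose_arrives = random.random() < p_maltose_follows
            f = 1.0
            if anticipates:
                f -= cost                 # pre-activation always costs energy...
                if maltose_arrives:
                    f += benefit          # ...but pays off when maltose shows up
            fitnesses.append(f)
        # Fitness-proportional reproduction with small mutations.
        traits = [min(1.0, max(0.0, t + random.gauss(0, 0.02)))
                  for t in random.choices(traits, weights=fitnesses, k=pop)]
    return sum(traits) / pop

print(evolve(0.9))  # maltose reliably follows: anticipation spreads
print(evolve(0.0))  # maltose never follows: anticipation is selected out
```

The second call mirrors the several-months experiment quoted above: remove the lactose-then-maltose regularity and the anticipatory response decays, with no individual cell having "decided" anything.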

And with no ritualistic ceremony to stand on in the bargain.

(It wasn't intended that this should be a demonstration of the Baldwin effect, but a purposeful effort can serve more than one purpose. Check this article by one of our better philosophers: http://www.kcl.ac.uk/ip/davidpapineau/Staff/Papineau/OnlinePapers/SocLearnBald.htm)

Tuesday, May 26, 2009



Riddle or Schmiddle


I've made mention of Daniel Dennett earlier on my blog, but I've not read any of his books cover to cover, so can't say I'm in any position to make a critical assessment of his thinking. (What he's quoted as having said about memes, however, makes me wonder about what else I'll find.) But I did note that in his reference to a riddle demonstrating that circumstances will sometimes fail to pinpoint the "real" cause of an event, it turns out he's really showing (perhaps inadvertently) that our present conception of proximate causation is in fact illusory.
(Proximate cause: A cause which immediately precedes and produces the effect, as distinguished from the remote, mediate, or predisposing cause.)
Except the riddle that follows doesn't quite demonstrate either aspect of this conjecture:

"A case in point is the classic law school riddle:
Everybody in the French Foreign Legion outpost hates Fred, and wants him dead. During the night before Fred's trek across the desert, Tom poisons the water in his canteen. Then, Dick, not knowing of Tom's intervention, pours out the (poisoned) water and replaces it with sand. Finally, Harry comes along and pokes holes in the canteen, so that the "water" will slowly run out. Later, Fred awakens and sets out on his trek, provisioned with his canteen. Too late he finds his canteen is nearly empty, but besides, what remains is sand, not water, not even poisoned water. Fred dies of thirst. Who caused his death? "

Dennett argues that there aren't any facts shown that will settle the issue.
But I presume to say otherwise.
Because the "real" question to ask is who caused Fred to die of thirst?
And that person was Dick, because regardless of what Tom did earlier, or Harry did later, it was Dick that fully emptied out the water. Harry's intentions in that respect were not causative, as the water was already gone, its future rate of leakage no longer determinant. But neither were Tom's intentions a part of the reality chain, because even if Fred had drunk the poisoned water, he might have died of the poison, but not of the thirst that Dick both intended to happen and was the proximate cause of happening. Within the given set of facts, he was the most immediate producer of the given effect.

So I think at least two things are demonstrated: There is always, by definition, one pivotal cause that can be found proximate to an observed effect. But even while you think it's been found, something or somebody may have hidden their own proximity to the effect. Tom, for example, may have seen Dick pour out his poisoned water, and found it wise to keep silent - that silence nevertheless an act of omission that aided and abetted Dick's purposes. And did Harry really fail to look for signs of leakage?
So do we then revise or clarify the definition of proximity to exclude such sins of omission? Otherwise all events have the potential of being discovered, in some yet undiscovered manner, as proximate to all others.

What also seems to be going on is an attempt to confuse causality with responsibility, and that gets us into the right versus wrong category, which has more to do with the influence of intent on causation than with proximity. But as we have seen here in our riddle, the links between cause and effect may not include intention. As Omar might have said, the moving finger can write to no particular purpose.

Would Tom have been right to keep silent, for example, or did all have some deviant role in promoting the hatred? And even if any of this had influenced Dick to act on his intent, it was Dick that made the real and final choice that determined and caused the nature of the death. Did he determine the entirety of its causation? No, but he put the events that sealed Fred's fate in what was to be (barring the intervention of "lucky" accident) their inalterable sequence.
Adding more reasons to assert that causation as a function of proximity IS as of now an illusory conceit.
And who would ever expect the false premise to be hidden in the question?

(And who would have thought I'd go to these ridiculous extremes to prove I solved the riddle?)

But to prevent this effort from being a total loss, here's a verse from the Rubáiyát that begs me to include it:

And that inverted Bowl we call The Sky,
Whereunder crawling coop't we live and die,
Lift not thy hands to It for help - for It
Rolls impotently on as Thou or I.

Sunday, May 17, 2009


Maybe I'm Wrong But

Some of this carping from evolutionary biologists or sociobiologists about altruistic and selfish genes and related strategies has been getting on my nerves, as I've commented elsewhere. It's much too reminiscent of the behaviorists and their theorizing - observing, rationalizing, testing, hypothesizing, more or less in that order - with less concern for the why of the matter than the what.
Such as when first they assume altruism must be genetic, then find what appear to be "sacrificial" genes in microbes - and while they haven't found the same in humans, the assumption is that where there is sacrificial behavior, the genes must be there. (http://mbe.oxfordjournals.org/cgi/reprint/msl016v1.pdf)
Case in point of this foolishness: their analogous comparisons between microbial, insect, and animal strategies. But microbes can evolve almost at will, with strategies fitted to, or activating, small differences in physiology, resulting in one such strategic form of the species carrying out specific duties that benefit the survival of the species as a whole - the choice of carrying out the needed and specific assignments not being an option. The version of this dynamic in insects takes longer to evolve, and with more distinctive differences in the physiological aspects of these strategies, often with main or central physiological entities [queen & drone?] carrying the gene pool that reproduces all the rest.
And microbes and insects don't know or anticipate their fates in advance. What we and some other animals with our multiple choice short and long term stratagems may see as sacrifice will be faced as simply a duty for the strategic entity that risks destruction as part of its function. It's long term-schmong term where their goals and purposes are concerned.

I'll have more to say on this later.

In the meantime, as to whether altruistic genes could or should exist, you might find this an interesting read: http://www.umass.edu/preferen/gintis/internal.pdf

Also look at this excerpt from Stanford Encyclopedia of Philosophy, Biological Altruism:
"Another popular misconception is that kin selection theory is committed to ‘genetic determinism’, the idea that genes rigidly determine or control behaviour. Though some sociobiologists have made incautious remarks to this effect, evolutionary theories of behaviour, including kin selection, are not committed to it. So long as the behaviours in question have a genetical component, i.e. are influenced to some extent by one or more genetic factor, then the theories can apply. When Hamilton (1964) talks about a gene which ‘causes’ altruism, this is really shorthand for a gene which increases the probability that its bearer will behave altruistically, to some degree. This is much weaker than saying that the behaviour is genetically ‘determined’, and is quite compatible with the existence of strong environmental influences on the behaviour's expression. Kin selection theory does not deny the truism that all traits are affected by both genes and environment. Nor does it deny that many interesting animal behaviours are transmitted through non-genetical means, such as imitation and social learning (Avital and Jablonka 2000)."

Saturday, May 16, 2009


You Sure We Aren't There Yet?

Here's the summation from latest declaration posted by Dr. Wilson at Huff Post:
'No matter how the groups are formed--at random, on the basis of experience, or through the funnel of genetic relatedness--no matter how flexible the choice of behaviors--altruism is locally disadvantageous and requires higher-level selection to evolve. It doesn't matter whether you call it group selection, kin selection, reciprocity, game theory, selfish gene theory, or anything else. All evolutionary theories of social behavior include the original problem and solve the problem only by identifying factors that enable between-group selection to overcome within-group selection. As Ed Wilson and I concluded at the end of our review article titled "Rethinking the Theoretical Foundation of Sociobiology", "Selfishness beats altruism within groups. Altruistic groups beat selfish groups. Everything else is commentary."'
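Wilson's slogan can at least be given a concrete toy form. The sketch below is my own trait-group construction (not Wilson's actual models; all parameters are illustrative): altruists pay a cost c to add a benefit b that is shared evenly within their randomly formed group, so within any group a selfish member out-scores an altruist by exactly c, while the global outcome depends on how the population is partitioned:

```python
import random

def altruist_frequency(group_size: int, total: int = 300,
                       generations: int = 150,
                       b: float = 15.0, c: float = 1.0) -> float:
    """Final global altruist frequency in a toy trait-group model."""
    population = [i % 2 == 0 for i in range(total)]   # start at 50% altruists
    for _ in range(generations):
        random.shuffle(population)                    # groups reform at random
        fitness = []
        for i, is_altruist in enumerate(population):
            start = (i // group_size) * group_size
            group = population[start:start + group_size]
            shared = b * sum(group) / group_size      # everyone gets the group benefit
            fitness.append(1.0 + shared - (c if is_altruist else 0.0))
        population = random.choices(population, weights=fitness, k=total)
    return sum(population) / total

print(altruist_frequency(group_size=10))   # many small groups: altruists spread
print(altruist_frequency(group_size=300))  # one big group: altruists are driven out
```

Within every group the selfish member always scores higher, yet with many small groups the altruists still take over globally, because the groups they happen to concentrate in out-reproduce the rest - the "between-group selection overcoming within-group selection" of the quote, reduced to a dozen lines.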

Here's a series of unanswered replies from yours truly. I think my last is the closest yet to the mark I'm trying to hit.

royniles
What you actually are observing is the mechanism of reciprocity in action, and since it's built into all species genetically, with different strategic aspects, as well as being fine tuned by their cultures, what you think you have observed by these experiments and exercises fails in understanding that the process isn't simply one of calculating advantages of selfishness versus altruism. Because these calculations have already been arranged algorithmically through a species' evolutionary experiences, and are based on factors of behavior more varied than a simple selfish versus altruism dichotomy. And they will be measurably different for each species and each cultural milieu of that species. You, Trivers, and others that see these factors as the key to social behaviors are simply mistaken - but your modeling is set up to be a self-fulfilling demonstration of these prophetic assumptions.
Posted 05:11 AM on 05/13/2009

royniles
You make the mistake of assuming altruism and selfishness are separate genetically based traits. They are not. If anything, they are optional choices in a spectrum of strategic responses regulated by a combination of genetic drivers.
Everyone has access to them by individually different measure, just as we differ individually in personalities. You can assign students roles to play in games, but the fact that the results will then be predictable tells us something about the effectiveness of the strategies, but nothing much about how the actual differences in individual personalities mold these strategies in "real consequence" circumstances. And nothing much about the entire range of strategies that make up the reciprocity tool kit.
Posted 06:02 AM on 05/13/2009

royniles
Finally, this is the claim that really bugs me: 
"Selfishness beats altruism within groups. Altruistic groups beat selfish groups. Everything else is commentary."
This faux axiom is fashioned to seem the paradoxical solution to some heretofore unsolvable conundrum. Thus by presenting the paradox we are meant to infer there's wisdom in there somewhere.
But the essence of paradox is that what often passes for wisdom is only an illusion. In this one, you portray a group as something formed by individuals for mutually cooperative purposes. Yet we are then told that for such a group to be successful, a majority of individuals in that group would have to be essentially non-cooperative.
You say your models have shown this to be true, as if no other rational explanation is needed - in any case I can't find where you have analyzed the "why" of this being the case in any clear and unambiguous fashion. Simply stating it's a valid conclusion based on several years of applying lots of math doesn't eliminate the feeling that presenting such authoritative statements as a substitute for a clear and reasoned examination of the "why" of the matter is at best suspect.
Posted 09:24 PM on 05/13/2009

royniles
And did you ever consider that if selfishness beats altruism within groups, it's when the goal is not to achieve success as a group but to achieve success as an individual within the group - the purpose of the group formation itself being more or less incidental to the purposes of these individuals.
But when groups themselves compete, and the altruistic appear to beat the selfish, what you seemingly designate as an altruistic group is in actuality a group of individuals acting altruistically among themselves to create a group that will be, in effect, the more competitive. Making it therefore more selfish than the competing group of strategically selfish individuals, who by failing to cooperate for a common goal, lose as a group.
And thus if you create groups that can work toward a goal as one individual, you have much the same dynamic in group competition as you do in individual competition. When the goal is selfish, the selfish always win.
Paradox dissolved.
Posted 03:01 PM on 05/16/2009

All life is about purpose and the strategies that evolve to best accomplish it. Groups may be formed for a purpose or find themselves in service of one by accident. Strategies are adapted accordingly. Anything not concerned with such purposes is irrelevant commentary.