
Thursday, January 28, 2010


On the Dubiousness of Purpose

I got myself involved in an online discussion of morality the other day, a subject I'd sooner have avoided, except for the prospect of being as fatuously remarkable as the next guy on that subject. And of dealing directly, as it turned out, with a self-described public intellectual. And so was I dubiously honored, with some of the commentary/evidence posted here as a reminder to avoid such temptation in the future. (Names have been altered to preserve my innocence.)

And so awkwardly I begin:
"M**, it occurs to me that what you've done here is fail to present your definition of morality in terms of its evolutionary purpose, instead defining it in terms of a long term goal with which our metaphorical evolution has always had a problem in abstracting from its perceptions of the immediate natures of its needs.
Rules of behavior are of course not in themselves the goals they're meant or best expected to attain. Instead such rules in the metaphorical eyes of our evolutionary apparatus were meant to serve a more immediate purpose - which some natural selective force over time could well have fashioned to fit the goal of human welfare.
Except it seems that time enough has not yet passed to witness that achievement.
Leaving us with the question of which of the strategies and tactics we're prone to use for short term goals might fit such long term purpose as well. Getting us back to a consideration of whether a focus on the nature of the purposes that fuel our expectations might make the answers easier to find.
And no, I'm not referring to the possibility of divine purpose but to purposive expectations endemic to the mechanisms of all living and choice making entities."

But then the big man replied: "Evolution, of course, doesn't have a purpose. But more to the point, to me evolution enters into the picture only early on, in endowing us (and probably other primates) with an innate sense of right/wrong and justice (as seen in the behavior of bonobos, for instance). After that, it's our ability to reflect on things that really gets ethics off the ground."

So then I say:
"M**: - I refer to evolution as purposive, and in particular with respect to the 'purposive expectations endemic to the mechanisms of all living and choice making entities.'
Then you say flat out that "evolution doesn't have a purpose" - but add that nevertheless and early on, it endows us (and probably other primates) with an innate sense of right/wrong and justice.
Which would seem to require some facility on their part for choice based on purposive expectations and the like. (Or would it not?)
Evolution then viewed from your perspective as purposive, serving what we have come to call a purpose, but unable in any way, even in the guise of life itself, to see that purpose coming.
Talk about a category mistake, you seem to have come up with a whopper."

So then this reply from M:
"I have no idea what you are talking about (italics mine). Evolution does not have a purpose because purposes are things that are characteristic of conscious beings - so that's out unless you subscribe to intelligent design. Evolution "endowed" us with a moral instinct simply because natural selection apparently favored such instinct in a limited form in certain species of social primates. So?"

So I say:
"So natural selection favors certain outcomes but in retrospect did so to no purpose. Ridiculous.
No I don't subscribe to intelligent design by some entity separate from life itself, nor do I subscribe to the Neo-Darwinist position which is even more magical.
I had thought it would be clear to you that with its sometimes slow and plodding trial and error ways, life has managed to engineer its own designs. But clearly I've had you wrong."

And then I add: "To recapitulate what I've proposed here as to an alignment between evolution and purpose, I had referred to evolution as purposive, in particular with respect to the 'purposive expectations endemic to the mechanisms of all living and choice making entities.' Later stating that 'with its sometimes slow and plodding trial and error ways, life has managed to engineer its own designs.' And as an aside, nothing in that view requires the assumption of either teleology or teleonomy. The predictions made by organisms are always to some extent inaccurate, but not unwitting (as teleonomy would require). They are intentional and therefore purposeful. They work in the end because they are consequential."

M** then replies to me as follows:
"As for purpose in evolution, as I said, unless one believes in ID it's nonsense. Come to think of it, even if one *does* believe in ID it's nonsense.
I have absolutely no idea what the phrase "purposive expectations endemic to the mechanisms of all living and choice making entities" could possibly mean."

And I conclude this exchange with:
"M**, if you were really interested in knowing what purposive expectations and the like might mean, you could simply google the phrases."

Adding to all in general: "-- of course moral behavior is relative to the particular circumstances. It's based on what we have learned to sense that others in that particular culture would expect us to do. This same expectational mechanism exists in all biological cultures. (And yes, M**, I've been advised that even bacteria have their own little separate cultures.) I also have some idea as to how these mechanisms evolved, and with what commonality of purpose. But this is clearly not the time or place to expand upon such a thesis."

So there it is, folks. Out in the open. I'd long wondered why purpose is seldom if ever used when explaining the evolution of, for the best example, behavioral traits - as if the behaviors themselves, done for whatever temporal purposes, were purposively irrelevant.
And voila, as for purpose in evolution (at least in the publicly intellectual view of things), it's nonsense. Did I mention this guy was an evolutionary scientist?


Monday, November 23, 2009


The Awareness Apparatus

OK, it's about time I posted something representing a bit of my own "philosophie," and so to start it off, here's a copy of a comment first posted at Jonah Lehrer's blog, The Frontal Cortex, http://scienceblogs.com/cortex/2009/11/reverse-engineering.php

It might make more sense to read the blog post first, which was about reverse-engineering: computer scientists attempting to artificially recreate the brain. But here goes my take on their chances anyway:

"The brain is a piece of meat that is, metaphorically speaking, operationally aware of itself - a self-actuating choice making mechanism that can use such awareness to apply and direct energy by its own choice. It can determine its own options or add to its available set through its own feedback assessment system.
Our human brain, in short, has awareness of its own operational purposes. We simply don't have that yet in our computers.
All a computer would arguably be "aware" of are the programs in its memory, and the results of whatever it is required by those programs to add to that memory. There is no awareness of the meaning of the symbols outside of that memory, no ability to seek out and retrieve sensations that it could use in its calculative processes before converting the input to symbols that mean something to the computer itself, and that would allow it to make choices through an assessment process outside of the restrictions of its programming.
Not that one could never be constructed that could - but only because one can never say never with any degree of certainty.
Posted by: royniles | November 23, 2009 4:07 PM"

So what is this "awareness" anyway, especially if not quite the same as consciousness (a subject for another time and place)? Because it would seem that for life to form at all, such forms needed to "see" their own existence as problematic and thus to need an awareness of some aspect of their metaphorical "self" as a functional necessity. But with the catch-22 being that some element of molecular awareness seems needed for awareness to exist at all.
So is awareness in some sense a property of all energy systems? And as a consequence, can self-activating choice making mechanisms - i.e., "life" - use such awareness constructively? Because the very need for choice implies there's an action function available to apply and direct that energy accordingly.

Awareness would then seem (at least to me) to be a functional aspect of memory. There arguably would be no awareness as we understand it (or think we do) in a system with no necessity for an accessible memory for purposes of computation. And the period of retention of sensory input needed for such operational necessity would seem equal to the "awareness" needed to make the calculated choice. We think of feeling "awareness" as somehow both instant and constant, but we may feel it only after a delay between input to memory and the calculated choice that triggers an action. (Or not - this glimpse of the muse has been far from clear.)

But so far it would appear that awareness begins with the retention of any of the forms of sensory submission that a calculative process must use to make a choice between or among its own set of options - with some form of "memory" retention perhaps applying to any information driven process that can be made to detect a signal to get the ball rolling.
Returning me to what I'd posted at the start about the brain as a self-actuating choice making mechanism that can use such awareness to apply and direct energy by its own choice - and can determine its own options or add to its available set of such through its own feedback assessment system.
Letting us see again that our human brain is a machine aware of its own operational purposes. And in my view representative of all living and calculating entities in that respect.

Now as to that action function I mentioned ----------