But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

We are absolutely unqualified to discuss or even comment on the details of the neurobiology discussed. Indeed, even the author himself doesn't provide any sort of explanation of how brains actually work, relying on general hand-waving terms that are almost tautologically true, as when he says that experiences 'change' the brain. Such change involves countless neural connections (it must, since what else is there in the brain that is relevant?), and would be entirely different in two different people.
In dismissing the computer metaphor as a fad based on current culture, which seems like a very apt critique, he offers only vague alternatives rather than a better explanation. Even if we don't somehow 'store' an image of things in some 'place' in the brain, we obviously do somehow retain the ability to recall them. If the data-processing imagery is misleading, what else could there be?
We have no idea! But one important thing this essay reveals is that the problem of understanding multiple-component phenomena is a general one. The issues with the brain seem essentially the same as the issues in genomics, which we write about all the time, in which causation of the 'same' trait in different people is not due to the same causal factors (and we are still struggling to figure out what those factors are in the first place).
[Image: A human brain, but what is it? Source: Wikipedia]
In some fields, like physics, chemistry, and cosmology, each item of a given kind, such as an electron, a photon, a field, or a unit of mass, is identical to every other, and their interactions are replicable (if current understanding is correct). For complexities like the interacting motions of many galaxies, each with many stars, planets, and interstellar material and energy, the computational and mathematical details are far too intricate and extensive for simple closed-form solutions, so one has to break the problem down into subsets and simulate them on a computer. This seems to work well, however, and the reason is that the laws of behavior in physics apply equally to every object or component.
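To make that contrast concrete, here is a minimal, hypothetical sketch (not from the essay or this post) of why such simulations are even possible: one and the same rule, Newtonian gravitation, is applied identically to every body, so the only real difficulty is the sheer number of interacting components.

import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def step(pos, vel, mass, dt):
    """One Euler time step of an N-body system: the identical law for every body."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    vel = vel + acc * dt   # every body obeys the same rule;
    pos = pos + vel * dt   # complexity comes only from how many there are
    return pos, vel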
Biology is made of molecules, and at that level the same must of course be true. But at anything close to the level at which we need understanding, replicability is often very weak, except in the general sense that each person is 'more or less' like every other in physiology, neural structures, and so on. At the level of underlying causation, however, we know that we are generally each different, often in ways that matter. This applies to normal development, to health, and even to behavior. Evolution works by screening differences, because that is how new species and adaptations arise. So it is difference that is fundamental to us, and part of that is that each individual with the 'same' trait has it for different reasons. Those reasons may be nearly the same or very different; we have no a priori way to know, and no general theory that is of much use in predicting. We should stop pouring resources into projects that nibble away at tiny details, a convenient distraction from the hard thinking we should be doing (and from addressing the many clearly tractable problems in genetics and behavior, where causal factors are strong and well known).
What are the issues?
There are several issues here, and it's important to ask how we might think about them. Our current scientific legacy has us trying to identify fundamental causal units and then show how they 'add up' to produce the trait we are interested in. 'Add up' means they act independently, and each may, in a given individual, have its own particular strength (for example, variants at multiple contributing genes, with each person carrying a unique set of variants, each variant having some specifiable independent effect). When one speaks of 'interactions' in this context, what is usually meant is that two factors combine in a way that goes beyond just adding up. The classical example within a given gene is 'dominance', in which the effect of the Aa genotype is not just the sum of the A and a effects. Statistical methods allow for two-way interactions in roughly this way, by including terms like z·A×B (some quantitative coefficient times the A and B states in the individual), assuming that this coefficient is the same in every A-B instance (z is constant).
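As a concrete illustration of the additive-plus-interaction idea, here is a minimal sketch in Python; all effect sizes, sample sizes, and genotype codings are invented placeholders, not anything from the post. A trait is built from two loci with independent effects plus a single pairwise interaction term whose coefficient z is taken to be constant across individuals.

# Minimal sketch (illustrative values only): a trait from two loci with
# additive effects plus one pairwise interaction term z*A*B, where z is
# assumed to be the same in every individual -- the standard assumption.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
A = rng.integers(0, 3, n)            # copies of the variant allele at locus A (0, 1, or 2)
B = rng.integers(0, 3, n)            # copies at locus B
beta_A, beta_B, z = 0.5, 0.3, 0.2    # additive effects and the interaction coefficient

trait = beta_A * A + beta_B * B + z * A * B + rng.normal(0, 1, n)

# Ordinary least squares on [1, A, B, A*B] recovers the coefficients:
X = np.column_stack([np.ones(n), A, B, A * B])
coef, *_ = np.linalg.lstsq(X, trait, rcond=None)
print(coef)   # approximately [0, 0.5, 0.3, 0.2]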
This is very generic (not based on any theory of how these factors interact), but for the general inference that they do act in relevant ways, it seems fine. Theories of causality invoke such patterns as paths of factor interaction, but they almost always make simplifying assumptions that clearly matter: that interactions are only pairwise; that there is no looping (the presence of A and B sets up the effect, but A and B don't keep interacting in ways that might change it, and there is no feedback from other factors); and that the sizes of effects are fixed rather than differing in each individual context.
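To see what the last of those assumptions hides, here is a small follow-on sketch (again with made-up numbers): if the interaction coefficient actually differs from person to person, a model that assumes a single constant z still fits, but it only reports an average and says nothing about the individual heterogeneity.

# Sketch of one violated assumption: the interaction coefficient is not
# constant but differs in each individual context (illustrative values).
import numpy as np

rng = np.random.default_rng(1)
n = 1000
A = rng.integers(0, 3, n)
B = rng.integers(0, 3, n)
z_i = rng.normal(0.2, 0.4, n)          # a different interaction strength per person
trait = 0.5 * A + 0.3 * B + z_i * A * B + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), A, B, A * B])
coef, *_ = np.linalg.lstsq(X, trait, rcond=None)
print(coef[3])   # a single 'average' z, hiding the person-to-person spread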
For discovery purposes this may be fine in many multivariate situations, and that's what the statistical-package industry is about. But the assumptions may not be accurate, and/or the number and complexity of interactions may be too great to be usefully inferred from practical data: too many interactions for achievable sample sizes, parameters affected by unmeasured variables, individual effects too small to reach statistical 'significance' yet in aggregate accounting for the bulk of the effect, and so on.
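A toy simulation can make that last point concrete. In this hypothetical sketch (locus count, sample size, and effect sizes are all invented for illustration), hundreds of variants each have an effect far too small to pass a stringent significance threshold, yet together they account for a large fraction of the trait's variance.

# Sketch of the power problem: many tiny effects, none individually
# 'significant', yet jointly accounting for much of the variance.
# All numbers are arbitrary placeholders for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, m = 2000, 500                          # individuals, contributing loci
G = rng.integers(0, 3, size=(n, m)).astype(float)
beta = rng.normal(0, 0.04, m)             # many small effects
genetic = G @ beta
noise_sd = np.sqrt(max(1e-9, 1.0 - genetic.var()))
trait = genetic + rng.normal(0, noise_sd, n)

# Test each locus separately, as a GWAS-style scan would:
pvals = np.array([stats.pearsonr(G[:, j], trait)[1] for j in range(m)])
print((pvals < 5e-8).sum())                     # typically 0: nothing reaches the threshold
print(np.corrcoef(genetic, trait)[0, 1] ** 2)   # yet in aggregate roughly half the variance is genetic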
These are not newly discovered issues, but often they can only be found by looking under the rug, where they've been conveniently swept because our statistical industry doesn't and cannot adequately deal with them. This is not a fault of the statistics, except in the sense that they are not modeling things accurately enough; in really complex situations, which seem to be the rule rather than the exception, they are simply not an appropriate way to make inferences.
We need, or should seek, something different. But what?
Finding better approaches is not easy, because we don't know what form they should take. Can we just tweak what we have, or are we asking the wrong sorts of questions for the methods we know about? Are our notions of causality somehow fundamentally inadequate? We don't know the answers. But what we do now have is knowledge of the causal landscape that we face. It tells us that enumerative approaches are what we know how to do, but also that they are not an optimal way to achieve understanding. The Aeon essay describes yet another such situation, so we know that we face the same sort of problem, which we call 'complexity' as a not very helpful catchword, in many areas. Modern science has shown this to us. Now we need to use appropriate science to figure it out.