Question Submission 13

Please add your question as a comment to this post.

31 Comments

In his paper, Professor Wimsatt provides several potential explanations for why many philosophers believe reduction is not important or is over-used. One of the explanations I found particularly strong was “an emphasis on structural (deductive, formal, logical) similarities has led to a lumping of cases of theory succession with cases of theoretical explanation, with the result that discussions of reduction, replacement, identification, and explanation…have become thoroughly muddled.” This is well stated and, I believe, summarizes many of the challenges we have been discussing this semester with scientific discovery and reductive explanation. Why has there been such a strong emphasis placed on structural similarities? Why is this preferred over theoretical explanation or theory?

In the beginning of Alexander Rosenberg's 4th chapter, he discusses the motives for reduction. He states that "our current theories do not constitute THE truth about the world, but they are closer to it than their predecessors..." This statement really caught my eye because I had never been told this, or thought about theories in this manner. It reminded me of the discussion we had with Professor Love about being taught THE scientific method and how that is not the correct way of teaching. Are teachers making the same kind of mistake? I remember being taught that this is the way things are, not that this is the way they are NOW but could change in the future.

In section 4.3 of Rosenberg’s chapter he talks about reductionism and recombinant DNA. He discusses how rapidly emerging biotechnologies provide “a new, reliable, high-speed method of determining primary structure and its effects on biological function.” Rosenberg then links biotechnology to reductionism, saying, “Ultimately it is the rapidly expanding application of these techniques that stands behind the reductionist’s confidence that we can specify the chemical mechanism underlying any biological function.”

Rosenberg’s book was written 25 years ago, and biotechnology has advanced significantly beyond recombinant DNA. So are today’s emerging biotechnologies, such as synthetic biology and bionanotechnology, giving increasingly more fuel to the reductionist point of view? Or does the growing interest in the area of systems biology negate some of this confidence in reductionism?

In Rosenberg's "The Structure of Biological Science" (1985) he starts section 4.7 'Qualifying Reductionism' by making the claim, "Even if we traced out the complete pathway through all the causally relevant macromolecules from DNA to phenotype, we would not have yet succeeded in reducing Mendelian to molecular genetics."
OK.
Rosenberg then goes on to say, p. 107, "The individual molecular descriptions that we can give to stand behind and specify the exact mechanism for any particular Mendelian gene are among the most important things we may expect to learn about it."
So here, to me, Rosenberg is furthering his biased view by basically claiming that the only merit of molecular descriptions would come from describing, or helping to describe, the specific mechanism for a Mendelian gene.
He continues "On the other hand, we can expect no general theory of the molecular gene to provide a systematic explanation of the Mendelian gene's behavior. For it could do this only if it bore such systematic, manageable relations to the phenotypes that Mendelian genes themselves do."
Yet here Rosenberg seems to outright refuse to accept any explanation given, or which could possibly be given, by a general theory of the molecular gene. He says this is due to the necessity of a systematic explanation of the gene, and that any systematic explanation offered by a molecular gene theory would have to bear manageable relations to the phenotypes, as the Mendelian genes themselves do.
My confusion now is why Rosenberg insists on making the function of molecular genetics a by-product of the interests of Mendelian genetics, i.e., to "specify the exact mechanism for any particular Mendelian gene". Does this have to do with Mendelian genetics being a more mature science and thus taking on the role of telling the "younger" sciences what to do?
Why do we even have this concept of Mendelian and molecular genetics? Why not just GENETICS? Is it purely for philosophers and historians of science to make sense of this distinction?

In section 4.2, Rosenberg sketches out a reductionistic progression which he derives from research into the functioning of the circulatory system: “The story is sketched below, beginning with the chemistry and concluding with the biological function it explains.” The implication is that the progression could easily be inverted, showing a clear line from the chemical interactions to the biological functions. There has been a lot of discussion about the worth and aims of reduction in this class. I view it as a necessity and part of the scientific enterprise, but regarding the details of how it works, or whether it is possible for biology to be reduced to the physical sciences, I have no idea. If such a reduction were possible and did happen, would that change the course of biological research? Would changing the definition of biology to 'structures/systems of chemicals' alter biology in any critical ways?

In his essay, "Reductive Explanation: A Functional Account," Wimsatt states that "the opposition between reduction and replacement is appropriate for successional reduction, but *not* for interlevel or explanatory reduction." Why does he claim that replacement cannot occur for explanatory reduction? Is it because successional reduction has to occur in order for explanatory reduction to occur? I am confused as to why replacement is applicable to one but not the other.

In Wimsatt's paper titled "Reductive Explanation: A Functional Account" there is a section talking about using lower-level cases to explain upper-level exceptional cases. He then goes on to explain that when one uses a lower level to explain an upper-level exceptional case, there also needs to be a definition of what makes a case exceptional. To me it seems like one is taking information from both upper and lower levels and attempting to mesh them together to provide an explanation as to why something at the lower level doesn't fit at the upper level. My question then has to do with the methodology one uses in determining an exceptional case. Is it reasonable to say that an ideal model is in place, and then information is placed into the model in order for it to work?

In Wimsatt's paper "Reductive Explanation: A Functional Account" there is a section called "Differentiating types of reduction for a richer vision" which start explain the difference between inter-level reductions and intra-level reduction. I think I understand the inter-level but i don't get the intra-level. could it be possible to explain more the different between then and give more examples?

I am unsure what to make of the following quote, "None of the philosophers currently writing on this topic are suggesting inadequacies in the kinds of mechanisms postulated by molecular geneticists for the explanation of more macroscopic genetic phenomena" (Wimsatt 2007, 242).

As per the statement above, should philosophers of science be in the game of looking for reductions (or the lack thereof) of mechanisms, however general or constrained, rather than theories, for the reason that scientists seem to be doing so?

Intuitively this account seems correct, but I am confused as to how exactly mechanisms relate to theories, laws, and explanations in biology. Is there a formal account? Must mechanistic accounts be reductive in character (where we have a scheme somewhat like mechanisms within systems of mechanisms), or are they reduced forms of more general, non-mechanistic theories? Aren't mechanisms, then, just as complicated creatures to reduce as theories? Is the hope that the mechanistic account provides a deeper understanding of the phenomenon to be explained (by the original theory) by being more sensitive to boundary conditions, environmental details, and context?

What ought we to make of cases like the precise mechanism by which enzymes lower the transition-state energies of molecules in a reaction? Scientists are not quite sure whether enzymes work by stretching substrate bonds, decreasing the mobility of the substrate, re-orienting the substrate into an optimal position for the reaction to move forward, or some combination thereof. Would a mechanistic account here be a reduction, but not a formal theoretic one?

My question is just more or less clarification on something I did not understand, but I thought it was important to ask because it ties in with the Schaffner article we read a few weeks ago. In Wimsatt's article (p.254), he says "While Schaffner (1974b) has questioned whether trying to accomplish the reductionist program per se is a good scientific strategy, I suspect that he (and perhaps many scientists) believe that it is at least a secret hope or end." I'm not sure what he means by this..."a secret hope or end." To me that implies that the scientists are relying on reductionism to solve certain problems, but perhaps Wimsatt disagrees with this interpretation?

I agree with you. I thought it was nice to see some explanation between scientific discovery and reductionism. I think it's important to make those connections.

In “The Structure of Biological Science,” Rosenberg invokes the fact that Mendelian genetics cannot be reduced to molecular biology, and that synonymous terms between the two sciences are lacking, as evidence that a theory of reduction does not hold water. But doesn’t this actually show that there exist fundamental problems with the theory of Mendelian genetics, while not providing evidence supporting the repudiation of the reductionist account of science?

I was struck by the difference between a static view of science and a dynamical view of science and think this distinction can be used to understand two very different philosophies of science. Wimsatt says, "In a static view of science, identity claims and corresponding claims of correspondence only may be empirically indistinguishable. But in a dynamic view of science, only identity claims can effectively move science forward." (2007 p 269)

When I put on my static hat I am led to wonder about what the best current theory/model/mechanism of some phenomenon has to say about what the world is like. This is because I am imagining that this theory/model/mechanism is Correct, that the investigation of that phenomenon is complete, and so this is as much as we are going to know about it. Here I think claims about parsimony and ontological deserts are understandable.

When I put on my dynamic hat I see claims in even our best account of some phenomena as working hypotheses useful for more inquiry. These hypotheses still seem to be making claims about a way the world could be, but I am inclined to put off the issue of what the world is like until 'the end of inquiry'.

Seeing the dynamical view of inquiry as useful for understanding the historical nature of inquiry, I wonder what the use of the static view of inquiry is, and what makes it so appealing to so many people. I think it is related to the psychological desire we seem to have to think that there is some understandable way the world is and that we are getting closer to it. History has so corrupted me into a dynamical view that I am having trouble seeing the alternative.

"A context-dependent translation is an incomplete translation." We discussed how much of the terminology used in Mendelian and molecular genetics is context-dependent. Wimsatt argues that translation of such terminology from one level to another is incomplete, and thus undesirable. I understand him to be arguing for the failure of a successive reduction of Mendelian genetics to molecular genetics because of variant context-dependent terminology in these disciplines which resulted in a replacement of the former by the latter. First, is this agreeable? Second, if it is, assuming a successful attempt were made at generating context-independent terminology for both fields, and consequently a successful successive reduction was achieved, would an explanatory reduction be inferable? Alternately, if it is not, why is my interpretation discordant?

In Wimsatt's essay, "Reductive Explanation: A Functional Account," he talks about successional reduction and replacement. He states that, "Replacement and successional reduction are opposites. But for explanatory reductions, replaceability is closer to and is by many treated as a synonym for reducibility." I don't understand why replacement and successional reduction are opposites. They both work to change theories, don't they? I'm also confused about how replaceability works with explanatory reductions. Is Wimsatt saying that replacement doesn't occur for explanatory reductions, but instead a very close synonym of replacement occurs? Wouldn't that technically mean that replacement does indeed happen for explanatory reductions? Or am I completely wrong here?

In class Dr. Wimsatt referred to himself as both a reductionist and an anti-reductionist, arguing that there are phenomena that are reducible and phenomena that are not. I think this needs to be further fleshed out. That is, even if some theory is reducible to another, this may not always be desirable, as the level at which we are trying to do whatever it is we are trying to do may call for a different analysis. That is, I suggest a pragmatic reductionism (mixing Waters' commitment to reductionism with the possible uses of reduction): reduce when reducing is useful, and conversely, even if ideas/theories are reducible, use the non-reduced version of them if it's easier (non-reduced theories may include variables that are easier to measure, track, etc.). This is akin to what the assigned reading calls an optimal strategy.

Secondly, it appears that some phenomena may not be explicable in terms of their constituent parts (whether this is theoretically possible is perhaps irrelevant, because we may be limited by our mental capacities and the observable phenomena). In a sense, it seems to me that our attempt to fully reduce all science to the smallest parts would lead to confusion. I much prefer that we explore nature without a preconception of reducibility (that is, without deciding whether the phenomena being studied can, in principle, be reduced). This view is similar to that of Feynman, who, when asked if he was looking for the ultimate laws of the universe, suggested that he is merely trying to find out more about the world, and the more he finds out the better. Feynman added that if it turns out there are some basic laws, these would be nice to discover, but that nature could be like an onion with endless regularities that are only applicable at certain levels.

My question is thus: why do we need to answer the question of reducibility, rather than encourage a plurality of approaches, some of which attempt to explain all the world through quanta, some of which look at different levels, and then make pragmatic choices based on the discoveries scientists make?

If Rosenberg were to abandon the Nagelian ideal of theory reduction and admit that Schaffner's corrections could be made, what type of corrections, and (in a rough sense) how many, could be made before a replacement has occurred rather than a reduction? I am skeptical that a reasonably equivalent Mendelian genetics could be derived from molecular genetics, but I think I may be more open if some outline of the distinction between reduction and replacement could be made clear without simply pointing to physical examples. Would it be possible to relate the terms in such a way that wholesale changes to Mendelian genetics haven't been made?

I think some of the advances at first fueled the belief in reductionism; however, the failure to deliver the goods that were imagined (lions with wings, if you will) has shown that the picture is far more complicated. For example, pieces of DNA cannot be directly translated into proteins, as they are composed of both exons and introns. Further, there are post-transcriptional changes which affect how proteins arise (RNAi can silence transcribed genes, keeping them from being expressed). It seems that many developments have made it more difficult to explain all phenomena by referring only to genes.

We hear less about genes "for" something, and more about development and the context in which genes find themselves (genomes, other organisms of the same and other species, abiotic conditions, etc.).

In Professor Wimsatt’s “Reductive Explanation: A Functional Account,” he discusses to what extent reductionism fits into the general scope of scientific inquiry. Wimsatt states, “While Schaffner (1974b) has questioned whether trying to accomplish the reductionist program per se is a good scientific strategy, I suspect that he (and perhaps many scientists) believe that it is at least a secret hope or end” (p. 254). Wimsatt goes on to argue that explanation is the primary aim of science. Three different possibilities for seeking the explanation of a phenomenon are given: how the phenomenon may be the product of causal interactions at its own level, at lower levels, or at higher levels. With regard to the first case, Wimsatt states that most phenomena may best be explained in terms of other phenomena at the same level. If an intra-level explanation is not found, one may look at higher or lower levels (the second and third cases).

Relating this to whether reductionism is a ‘secret hope or end’ for many scientists, it would seem that the level at which an explanation is found would dictate whether reductionism is indeed an end-product of the scientific inquiry. However, this doesn’t necessarily mean that reductionism is a driving force in the initial scientific inquiry as Schaffner may claim; it may instead be a by-product of the final result. To what extent, then, can reductionism be used as the means to an end in scientific inquiry? Is it more of an ad hoc feature of scientific practice (as argued by Hull)? Are the implications for a possible reduction the same or different in the two cases?

In Wimsatt's piece, he describes four biases that are implicit in forwarding the standard model of reductionism:
1) assuming lower-level theories are more general/explanatory
2) distinctions between context of discovery and justification
3) laws over causal factors/mechanisms
4) problems of translatability

I'm wondering how much of the baggage here is due to these biases being intertwined with the layer-cake model of the world? Are these dilemmas in reductionism also transferable to the other "sciences"? Or is the shift in ideas around reductionism primarily coming from biology? I was thinking about the idea of emergence, and originally I thought it was applied to the emergence of new species or properties of new species, but now I'm thinking the implications of such an idea are vastly more profound: assuming the world is a "layer-cake" (which it is not), does emergence not only claim that higher levels of organization cannot wholly be explained by the immediately preceding layer, but also that each layer would have a corresponding "emergence" of a model for reductionism? This would have serious implications for the social sciences...

“But they are also the core techniques in the theoretical revolution that has converted reductionism from the rhetorical to the literal as a description of biological method. Ultimately it is the rapidly expanding application of these techniques that stands behind the reductionist’s confidence that we can specify the chemical mechanism underlying any biological function. For this attitude is no longer a millennial hope; it is a medium-term expectation.” For once I feel like someone said what I have been feeling all along; we can physically see reductionism. My question is why are people so hesitant to accept reductionism in biology? To me, it seems much easier to understand the concept as well as teach and pass on the information when it is reduced.

This is an interesting comment and I think it gets at a real problem with science education. I think that in most high school (and, unfortunately, some college) science courses, teachers present the current scientific consensus as "the truth," and do not consider the fact that the majority of scientific consensuses in the past have been falsified. An accurate depiction of science is not being given unless it is depicted as a dynamic and progressive enterprise.

I think it is important not to look at science as a means to discover the truth, but rather, simply as a way to become more knowledgeable through the process of discovery. I'm not sure what Rosenberg meant by THE truth, but to me this should not be the focus of scientific discovery, or of the philosophy of it. This may be the issue with the scientific method: so much focus is put on the method itself, and the laws it creates, that the meaning behind the discovery is lost.

I think this particular thread is interesting. First, look at Will Bausman's response earlier in this thread to get a good idea at what it means to look at science in two different ways (dynamic vs. static).

I'm not sure I would go as far as some on this issue. It is very difficult to interpret current scientific theories as THE truth. But this must be distinguished from using them as progressively better and better heuristic tools for manipulating and understanding, at a local level, the world around us (see Wimsatt, 2007). To me, this implies a certain open-mindedness about our set of current scientific theories, without having to dismiss a single one of them as false. We are certainly limited and fallible beings, but it is not the case that we in fact know which of the theories we employ are false. As a matter of fact, a lot of them serve us very well. Not to mention that taking this perspective doesn't necessarily throw all of our explanations, theoretical, reductive, or not, under the bus. Nor should it throw away all parts of any past or present false theory or explanation.

Without saying too much on this issue, I think it is important to realize that our scientific methodology, including our sensory instruments, computing instruments, and manipulating tools, has become much more accurate and precise. In that sense, these provide a better means of verifying theoretical predictions (e.g., we can now verify the existence of things we cannot directly observe, like electrons, something previous generations did not necessarily have the power to do with theoretical entities like phlogiston). If so, I think our theories are better than they once were, though this still does not constitute the truth. In that sense, science seems to persist in ruling certain things out, not necessarily guaranteeing THE truth.

I think your question is interesting because I was wondering the same thing. I think I understand inter-level but I don't quite know what intra-level means...
I guess I am left wondering where the distinction of intra and inter is made...

I think that Rosenberg’s view of molecular genetics given here is simply in relation to reducing Mendelian genetics to it. I don’t think that he would claim that this is the only merit of molecular genetics, especially considering he goes into detail on the accomplishments of molecular genetics (regarding hemoglobin specifically). His insistence on discussing molecular genetics in terms of its relation to Mendelian genetics is simply because that is what is relevant to his argument here, not because he believes it is useless otherwise. What he is saying is that the explanations that molecular genetics give are not the same type of explanations that Mendelian genetics gives. The descriptions of phenotype production in molecular genetics are clouded and complicated, whereas Mendelian genetics gives relatively simple relations between genes and phenotype. One could argue that Rosenberg has oversimplified the relation of Mendelian genes to phenotypes, but I believe his point still stands that the appropriate simplicity for explanatory power is lost in molecular descriptions. Thus, we cannot reduce Mendelian genetics because its explanations serve a different function (which would also give a strong reason for keeping the two separate, instead of conflated into an umbrella term). I think that Professor Waters would also point to great differences in the method of investigation of the two disciplines to justify their separation.

This clarification would help me as well. I feel marginally confident in my understanding of the "secret hope or end". Does it point to the tendency of reductionism to funnel the sciences into physics? And that, in many cases, the purpose of reduction seems to be to break events and identities down to physical properties and interactions? I think the "secret hope" is to have all of the sciences explained by the properties and interactions of physics, and that reductionism is the best method to achieve the fabled 'grand unifying theory'. Achieving such a thing is what I believe is meant by "the secret hope": not only "solving certain problems", but solving certain problems by reducing them to physics. This is how I interpreted it, but I am not sure if it is what Wimsatt intended.

So are today’s emerging biotechnologies, such as synthetic biology and bionanotechnology giving increasingly more fuel to the reductionist point of view?

To me it seems that some of today’s emerging biotechnologies reinforce the reductionist’s account, while others further reinforce the bulwark responsible for some refusing to accept reductionism. This is because some emerging data undermines what the scientific community holds to be fact, narrowing the scope of knowledge, which creates problems for a reductionist account by limiting its applicability as well. On the other hand, I think some new technologies lend support to reductionism by further clarifying, supporting, and increasing the resolution of the reductionist account. It depends on the science and the situation.

To answer this question in the most basic manner, you should just look at the two as boundaries, and intra-level reduction has to do with the issue of race.

Would changing the definition of biology to 'structures/systems of chemicals' alter biology in any critical ways?

That's an interesting question, and one I've thought about often over the course of this semester. I don't think it would alter biology in any way, since you'd basically be reducing the definition and not changing anything about the reality in which biology operates. Saying that biology is the study of organized chemical systems isn't changing the reality of those systems, only how we define the study of them. It would also be helpful to compare this definition to a traditional one - that biology is the study of living organisms. Would replacing 'living organisms' with 'chemical systems' change the way biology operates? I would say no - you're just using a reduced/alternative definition.


This caught my eye too when I was reading Rosenberg. I agree with Brooke about the parallel between this and the teaching of the scientific method, but my big problem is that it makes me have no faith in present science. If he is claiming that what exists now isn't the truth, where is the hope that we will ever actually get at the truth?