Saturday, March 21, 2015

Is the Constructal Theory by Adrian Bejan Really Proven or Provable? A Challenge to Constructal Theory

I want to preface this article by stating that my attempts to obtain copies of Dr Bejan’s primary reference works, e.g. the peer-reviewed articles which present his theory, have been unsuccessful, as these are not publicly available. On one site I would have to pay $70 just to look at the paper (which I will not do), and others simply state “submit your request for a download, no copies are available”.


Adrian Bejan is a professor of Mechanical Engineering at Duke University. He postulated constructal theory in 1996.
According to Bejan, the constructal law states: “For a finite-size flow system to persist in time (to live), its configuration must evolve in such a way that provides greater and greater access to the currents that flow through it.”

Constructal theory has been, and continues to be, vigorously touted by its adherents as scientifically verified and as having “growing scientific support” (Wikipedia). There are numerous web sites dedicated to the theory itself, one of which is here: http://www.scoop.it/t/constructal-design. It reports, for example, the number of peer-reviewed citations of Dr Bejan’s theory per year, which it claims exceeds 10,000. These days, citation count is considered one of the primary measures of success of any scientific theory.
And in January of this year, the American Association of Physics Teachers held a conference in San Diego, CA (http://www.aapt.org/Conferences/wm2015/session.cfm?type=plenary) that included a plenary session on constructal theory, so it is apparently accepted in teaching curricula.

So my questions about the constructal “law” are the following:
  1. The theory is based on a principle, or statement, that appears defined and yet is not definable in terms of actual physical values. What, for example, are the physical values the theory would take as inputs, and what would be its outputs? Are these in units of force (newtons, dynes, or pounds), energy (joules), time (seconds), amount of substance (moles of atoms or molecules), or density (kg per unit volume)?
  2. I find that the underlying premise is reducible, essentially, to another, simpler physical argument (or arguments) already known. It is based on the path-of-least-resistance principle of physics, which is further reducible to Newton’s laws of motion: an object (a body) will continue in its path until it encounters a force resisting its motion. Bodies will naturally take paths of least resistance, as no object or mass will take a path of higher resistance. This very basic law of physics would describe, at a microscopic level, virtually all of the phenomena that are described as being newly encompassed by Bejan’s “constructal law.” Furthermore, we note that Newton’s original postulate contains measurable quantities: force in newtons (N), mass (kg), and displacement per unit time (m/s). Based on this evidence, it would appear that Dr Bejan has anthropomorphized an already existing law, Newton’s law of motion, since his principle is more easily, and more accurately, reducible to one already in existence. I don’t doubt that Bejan’s theory describes phenomena which are themselves encompassed, or encapsulated, in terms of known physical units. The point is that the tendency of matter to take paths of least resistance, Newton’s law of motion, is a more simplified version of what Bejan’s theory is founded upon. At the center of these laws is, of course, the conservation of energy and momentum. One would argue that matter behaves the way it does because of energy conservation, also a principle with defined physical quantities associated with it. But of course these do not entirely describe behavior, since if we had only the passive transfer of energy and absolute conservation in systems, we would not have the second law. Other underlying principles of Bejan’s theory rest in Fourier’s theory of heat conduction and Fick’s law of diffusion of particles. But the fact that particles in an enclosed volume will, by collisions, find a stable lower-energy state with fewer collisions per unit time is very intuitive and can easily be seen with billiard balls and mechanical diagrams. These underlying physical principles are key, I believe, to understanding what the limitations of Bejan’s theory actually are and how it might be defined in terms of actual physical values. Because the theory itself is not stated in these terms, it is instead a very generalized principle. It should not be too surprising that the basic physical laws of motion apply universally to every situation imaginable: to lungs, rivers, sand dunes, heating and cooling (as these are particles transferring heat), and to all others. There are no exceptions. Supporters of constructal theory seem surprised that their theory applies in so many ways and under so many different conditions, but as we can see, these tests are really only the simpler laws themselves proved out again and again. (A minimal numerical sketch of this force picture follows this list.)
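To make the force picture in point 2 concrete, here is a minimal sketch of my own in Python. It is not Bejan’s formalism or any published model; the parcel mass, slope angles, and friction coefficients are hypothetical numbers chosen only to show that every quantity in the reduction carries ordinary SI units.

    import math

    # Toy illustration (hypothetical numbers): one "billiard ball" parcel of
    # water on a resistive silt bed. Gravity pulls it down-slope; the bed
    # resists through the normal force and friction. All quantities are SI.

    g = 9.81  # m/s^2, gravitational acceleration

    def parcel_acceleration(mass_kg, slope_deg, mu):
        """Net down-slope acceleration (m/s^2) of the parcel, or 0.0 if the
        resistive force from the bed cancels the driving force."""
        theta = math.radians(slope_deg)
        driving = mass_kg * g * math.sin(theta)   # N, weight component along slope
        normal = mass_kg * g * math.cos(theta)    # N, bed pushing back on parcel
        resisting = mu * normal                   # N, frictional resistance of bed
        return max(driving - resisting, 0.0) / mass_kg

    # The parcel moves only where resistance is low enough; that is the entire
    # "least resistance" content of the reduction, stated in newtons and kilograms.
    for slope, mu in [(2, 0.02), (2, 0.30), (10, 0.05)]:
        a = parcel_acceleration(0.001, slope, mu)
        print(f"slope {slope} deg, mu {mu}: acceleration {a:.3f} m/s^2")

Summing many such parcels, cell by cell, is how the river examples discussed below can be built up from ordinary mechanics, with no additional principle required.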
But the real test of Bejan’s theory, as with any theory, is in its applications and its claims. Does it account for some new behavior which is currently unexplainable? One example of a constructal theory “application” is found broadly in its claim of the equivalence of animate behavior with inanimate behavior. This assumption is another critical reason why, in my opinion, the theory is incorrect and undemonstrated. That this point alone has not been tested scientifically is another question, but an important one. It is this critical assumption, and that is what it is, an outright de facto assumption, that leads to the very broad conclusions of this theory in the specific area of the animate/inanimate problem.
Many examples are provided which purport to show how the theory links together, or shows universality between, animate and inanimate behavior. Aerial photos of alluvial fans, for example, are compared to the branching patterns in the lung of an animal. Key to this comparison is understanding why I made points 1 and 2 above: simply because one thing is true and physically demonstrable, it does not follow that, perhaps circumstantially, another principle must also be true.
Bejan is absolutely correct that alluvial patterns behave according to the laws of physics, which dictate, by conservation of momentum and energy, the intricate flow patterns of water against a resistive material, in this case mud and silt. These are entirely natural patterns. The water does not “know” that it must “form” a more efficient flow pattern; the efficient flow pattern is dictated by the conservation of momentum and energy, because the water is taking the path of least resistance. The water is composed of particles, which we may approximate in a force diagram as tiny billiard balls, but the water itself is always governed by the force pulling it downward, F = mg, and the resistive normal forces opposed to it by the silt and mud of the river bottom. The entire problem of liquid flow, though so complex it would likely have to be modeled in a computer, is in fact reducible, in a theoretical form, to the force diagram and the simple collision of a billiard ball against a stationary wall or other object. Even with the complicating factors of hydrophobicity and molecular stickiness, each “body” must take the least resistive net path as it moves. That is the beauty of physics, its reducibility. In this theoretical analysis, examining the river/bank interactions one cubic millimeter at a time, we can reproduce a complex river pattern from these microstates of well-known particle behavior. My argument is that, theoretically, the flow pattern of water, a mass of known density (kg/L) and velocity, is not unlike the flow pattern of tiny spheres, particles, down a hill. These conform to Newtonian physics, and for this argument any deviation is irrelevant or not in question. An accurate model of the flow need not be anything in particular; Bejan’s claims are so universal that virtually any model would fit them. (A minimal sketch of this cell-by-cell picture follows below.)
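As a companion to the cubic-millimeter picture above, here is a second minimal sketch of my own (assuming numpy is available). The resistance grid and its values are invented purely for illustration: each parcel simply steps into whichever neighboring cell opposes it with the smallest resistive force, and channel-like paths emerge with no constructal principle invoked.

    import numpy as np

    rng = np.random.default_rng(0)

    mass = 0.001               # kg, one parcel of water
    g = 9.81                   # m/s^2, gravitational acceleration
    driving_force = mass * g   # N, force pulling the parcel downslope

    # Hypothetical bed: each cell holds a dimensionless friction coefficient.
    rows, cols = 40, 21
    friction = rng.uniform(0.1, 0.9, size=(rows, cols))

    def least_resistance_path(start_col):
        """March the parcel downslope, always stepping into the neighboring cell
        whose resistive force (friction * driving force, in newtons) is smallest."""
        col = start_col
        path = [(0, col)]
        for row in range(1, rows):
            candidates = [c for c in (col - 1, col, col + 1) if 0 <= c < cols]
            col = min(candidates, key=lambda c: friction[row, c] * driving_force)
            path.append((row, col))
        return path

    # Nearby parcels tend to merge onto the same low-resistance cells, which is
    # all that "channel formation" requires in this reduced picture.
    for start in (3, 10, 17):
        print(f"start column {start:2d} -> end column {least_resistance_path(start)[-1][1]}")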
However, I believe the problems of this theory are more fundamental. The important point is that none of these claims of engineering physics have any bearing on Bejan’s claim that these interactions somehow explain, or even suggest, the equivalence of animate with inanimate behavior, an equivalence that is somehow assumed by the theory itself. Are we to believe that the flow of water against a bank, a microcosm of this model, is somehow equivalent to a living process?
Likewise, if we were addressing the chemical behavior of the water, its wettability, its reason for being a liquid and not a solid, perhaps its reactivity with other solutes, we would have to have consistency between the particle model and the macroscopic behavior observed. The point here is that Bejan’s theory is reducible to another set of theories of particle behavior: the laws of motion of macroscopic and atom-sized bodies, and the laws of chemistry.
I am therefore not certain how a constructal theorist can claim that this is a tested theory, or even whether constructal theory is definable as a theory, that is, whether it meets the burden of a scientific theory. It must be capable of describing phenomena, testable phenomena, that cannot be encompassed by an existing simpler theory. What aspect of alluvial fans or lung branching is not encompassed by the physical and chemical theories of particles already established by the end of the 19th century?
Constructal theory claims to describe far-ranging phenomena. For example, it is stated in a peer-reviewed paper [again appearing in Nature.com’s publication “Scientific Reports,” entitled “Cave spiders choose optimal environmental factors with respect to the generated entropy when laying their cocoon,” by Eliodoro Chiavazzo et al.] that spiders make webs in specific locales, i.e. in caves, because the web location is at a more preferred energy state than others; in other words, the web patterns are “chosen” by spiders in such a way that they conform to the flow of energy based on constructal theory. This basic assertion is not supportable if one considers the problem from a different angle, one less biased, in which the spider is not first presumed to be complying with future energy constraints. The paper already assumes that the web structures, in the greater system of the spider, are at lower energy, and does not consider the energy required to build the webs themselves (I address this point in more detail elsewhere). While it may be true that the locations of the webs are the most energetically stable, relative to incoming winds and so on, this fact does not necessarily connect to the other suppositions of Bejan’s theory or support those contentions.
More broadly, we have to consider again the smaller components underlying the physics that sufficiently explain the inanimate particle behavior. For example, would we expect a beaver to build a dam structure that is unstable? Is the stacking of logs and twigs and mud going against the “rules of engineering”? Beaver dams are already well documented as engineering feats which hold up for decades. I think this path of logic is a precarious position, because in fact that kind of logic has been applied and shown to be untenable in the biological world. The behaviors of organisms which do not conform to reasonable usages of energy are many: consider the tendency of organisms to become larger, for example, or even such basic behavior as reproduction. There are, in my mind, many easily asked counter-examples that this theory apparently avoids. How about a spider not making a web at all? Can you actually prove that a bear or other animal climbs over a mountain because that is the path of least resistance? Or do birds take flight because flying is actually simpler or less resistive than standing still? The constructal theory advocates, I am sure, can (and apparently do) imagine all types of scenarios to demonstrate their new theory, and yet it only takes one exception in science to nullify these results. It is usually the simplest question that is the hardest to answer.
And you will not find a single example of a paper by constructal theorists which explains why, for example, a bear might run instead of sleep, a very obvious contradiction to the claim that “organisms build in energy-efficient places.” You will, however, find very specific predetermined conclusions that fit models which really could not be proven some other way: we already know that spider webs are not built near traffic or in exposed areas, so this data is obvious, a foregone conclusion. Yet what constructal theory assumes is that this cannot be deliberate action on the part of the spider [does a spider not sense wind speeds, noise, predators, and threats?], but must be thermodynamically predetermined.
And there are other citations claiming to utilize constructal theory in applications as far-ranging as cancer biology (http://www.nature.com/srep/2014/141024/srep06763/full/srep06763.html), which purport that constructal theory applies there as well. An article in Nature.com’s “Scientific Reports” entitled “A thermo-physical analysis of the proton pump vacuolar-ATPase: the constructal approach” (2014), by Umberto Lucia et al., discusses how constructal theory makes such lofty claims as accounting not only for tumorigenesis, or transformation, but also for cancer cell growth. I excerpted the following to attempt to make sense of it: “…Consequently, the analysis of the flows through the cell membrane appears fundamental in the comprehension of the biophysical and biochemical mechanisms inside the cell [12]. But this kind of analysis is powerfully described by the constructal theory. Indeed, by referring to the constructal law, a living system presents two characteristics: it flows and it morphs freely toward configurations that allow all its currents to flow more easily over time [13]. Life and evolution are a physics phenomenon, and they belong in physics [16]. Constructal law is a new approach introduced in thermodynamics in order to explain optimal shapes of natural structures [13-16, 19-21]. The fundamental bases of the Constructal law was expressed [21] as follows: ‘For a finite-size flow system to persist in time (to live), its configuration must evolve in such a way that provides greater and greater access to the currents that flow through it.’”
If there is a constructal law, what is this law? The variables for evaluating the theory are not definable as written. Moreover, in this paper there is never any negative hypothesis; in other words, what happens if the hypothesis isn’t true? Do the results still agree? These basic elements are completely absent from the paper. And the problem is that you can’t write the thesis of the constructal law in the hypothetical:
  1. “If a finite-size flow system persists in time, its configuration will not evolve.” Or,
  2. “For a finite-size flow system to persist in time, its configuration will evolve, but the flow through it will not increase.”
Neither of these is a possible phrasing that would lead to a possible negative outcome of the claims, and thus be testable. Cancer cells either are or are not responsive to the presence of a specific gene. Cancer morphogenesis is correlated with certain genes, as are all drug-cell interactions. And further, physical arguments such as the law of gravity, the laws of motion, Maxwell’s equations, and the laws of diffusion are all, in theory, testable and must defeat the negative hypothesis.
My thesis in a grant project was essentially to show that a specific drug molecule, a chemotherapeutic, would be better encapsulated by a host molecule for the purpose of improving its pharmacokinetics. It had myriad implications for whether the therapy would be improved or not. So the criteria were carefully established, and because it was real science, the thesis could be shown false or it could be affirmed. That is how all scientific theories are structured.
What I find, and what we must conclude, is that this constructal “law” has a series of presumptions built into it which must first be presumed true in order for the predictions to hold. That is, it must be presumed that the volume of an organism is increasing, or has increased, because a larger volume is more beneficial for moving a mass through water (as with submarines). In other words, it supposedly predicts that a slight advancement, or a change in position, will occur because the final state “will” be more conducive to flow? The future conditions somehow predetermine the present? It is easy to dismantle this kind of theory logic by simply asking: what will the physical system do at time t, when it is not yet a larger volume? If the answer is that it was already a larger volume, then you have the problem of how it became larger in volume to begin with.
These claims specifically, that constructal flow theory or law provides insight into the growth of cancer, are too difficult to parse, or simply do not provide evidence of how these observations are not explained by known cellular processes or genes. It is not too surprising that these examples in cancer cells do not account for the basic logical problem, that some claims or predictions of the constructal law must be presupposed in order for the theory to hold. If anything, they simply obfuscate further by focusing on details of already-known cancer morphology. As we have discovered, “pre-advantaged states” of greater flow would have to dictate a change in the present state of flow. The stated law, as written, is impossible to test.
And if, for example, it is clearly shown that the wavelets and other physical phenomena are more easily explainable by particle theories already in existence, why should we assume that constructal theory should somehow provide exceptions to biological phenomena?
Many of these constructal theory papers delve very quickly into formulas, which I will not bother to reproduce here. It should not be surprising that differential equations can be found that apply to cell growth or to chemical behavior. However, if the constructal theory, the thesis of their paper, cannot be proven as written because of logical and definitional issues in the central hypothesis, then the differential equations (roughly three pages’ worth) are meaningless, and they further highlight the lack of evidence for how future states can predetermine present conditions. If a theory states that a system will do something, that cancer cells will grow in order to become more conducive to flow, you have stated that future states (e.g. the more optimized cancer cell shapes) are determinant and dictate present ones. This logical deduction flows directly from the argument. It is easily tested, as I have shown, by allowing the present state to exist for some test interval dt and seeing what then must happen. (A minimal sketch of this dt test follows below.)
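To show what I mean by the dt test, here is a minimal sketch of my own in Python. The “access” metric and the dynamics are hypothetical placeholders invented only for illustration, because the law itself supplies no metric, no units, and no timescale; the point is that either outcome over the interval dt can be read as consistent with the statement, so there is no failure condition.

    import random

    random.seed(1)

    def access_metric(state):
        """Hypothetical stand-in for 'access to the currents that flow through it.'
        The constructal statement defines no such quantity; this one is invented
        purely so that something can be measured at all."""
        return sum(state)

    def evolve(state, dt=1.0):
        """Arbitrary dynamics standing in for whatever the system actually does
        over one test interval dt."""
        return [max(0.0, x + random.gauss(0.0, 0.5) * dt) for x in state]

    state = [1.0, 2.0, 0.5]
    before = access_metric(state)
    after = access_metric(evolve(state))

    # With no specified metric, interval, or tolerance, both branches can be
    # read as consistent with the law, which is the testability problem:
    if after > before:
        print("access increased: counted as confirmation")
    else:
        print("access decreased: excusable as 'not yet evolved' or 'wrong metric'")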
These are the new findings that form the “take home” message of this analysis, and they are relevant in spite of the claim that there are numerous citations in peer-reviewed journals apparently supportive of constructal theory, because these are issues that have not been discussed previously. Although the requirements for publication may include previous citations in peer review, in the case of constructal theory it is apparently not critical that the peer-reviewed articles themselves actually support or prove the central thesis that the investigators purport to investigate.
My objective, therefore, is to bring into question not only the standards by which peer-reviewed science is evaluated, but also, from my perspective, how inconsistent those standards are and how they apparently can be relaxed or tightened, provided one has sufficient peer-review clout. It is, I believe, already known, or should be obvious to other scientists in the field, that constructal theory is in fact a vague and undefined theory, not new by the definition of what “new” means in research, nor a theory that makes predictions that can be tested or that cannot easily be accounted for by other existing theories in a very obvious way. It is time for “peer review” itself, as a primary standard, to be scrutinized as a dictate of scientific merit.
I’ve presented numerous arguments for why this theory should be scrutinized more carefully, but I believe the strongest concerns the way it is logically constructed. It bears repeating: “For a finite-size flow system to persist in time (to live), its configuration must evolve in such a way that provides greater and greater access to the currents that flow through it.”
By claiming that forms in nature are somehow dictated or driven by the achievement of more efficient flow patterns or rates, constructal theory proponents have presumed their conclusions within the argument itself. Key to this are loaded phrasings such as “(to live)”. The term “(to live)” in constructal theory is, of course, never defined, but is carefully assumed within the statement that purports, we recall, to account for why things are living or behaving as though they are. You can’t test such a theorem.
So I would invite constructal theory advocates and scientists to respond particularly to this last argument, at least first, as the other arguments would take considerably more time to resolve. Does the theory itself presuppose its own claims? And how does a system “know,” ahead of time, what its best and most optimal flow pattern should be? As we have discovered here, “pre-advantaged states” of greater flow, at some future time, would have to dictate a change in the present state of flow (at the present time point). That is contrary to the basic arrow of time and to thermodynamics. The stated law, as written, is impossible to test.










Author notes: This article also appears on "The Crisis Equation"

Intriguing Work By Professor Roger Kamm on "Emergent Behavior"

I just recently found some very intriguing, ongoing research by Professor Roger Kamm at MIT, in what is described as the field of “emergent behavior of integrated cellular systems.” On a somewhat related note: I have posted here a number of articles on determinism, and more importantly on the empirical applications of determinism from a theoretical point of view. What is very interesting about Dr. Kamm’s research is that it highlights this very question about the nature of causality, and does so in a very real-world problem area: the possible manufacture of “living machines.”
Emergent behavior is described as essentially the way in which individual cells form higher-functioning organs. This field of engineering turns upon the basic premise that top-down manufacturing, coupled with intrinsic natural processes (for example those found in cells themselves), can potentially lead to new, artificial cells which might be directed to all sorts of activities, including making new medicines, treating diseases, and even making mimic organs.
Many will probably recall the announcement by Craig Venter of the eponymous J. Craig Venter Institute (JCVI), a few years ago (2010), of the world’s first “synthetic cell”: http://www.jcvi.org/cms/research/projects/first-self-replicating-synthetic-bacterial-cell/overview/
Venter approached the problem as a geneticist who had sequenced very large genomes: if one can know the genetic code, then, as in synthetic chemistry, it should be possible to design organisms from the bottom up. But I believe this illustrates more key differences than similarities.
One of the things, I believe, that differentiates Kamm’s work from Venter’s is that Kamm is proposing to venture into multicellularity. But I believe the other main difference is that he is attempting to show that cells might achieve, with proper engineering, some artificial property, and that this property might be “emergent.” And that is the fascinating part. What is it that makes one cell property “natural” and another cell property, or behavior, unnatural or “emergent”?
In Dr Kamm’s abstract (which I am attempting to obtain the transcript of) entitled “Emergent Behaviors of Cellular Systems: Lessons In Making Biological Machines” he states the following:
Recent advances in synthetic biology, developmental biology and tissue engineering, have raised the prospect of building living, multi-cellular machines.
Kamm is undoubtedly referring to advances in bio-engineering that have led to very advanced applications like artificial skin grafts, used for grafting in burn victims, in which human skin cells are essentially tricked into populating artificial lattices, with their immunological signatures attenuated to reduce the potential for rejection. In a multicellular “machine,” an organ mimic might help to produce insulin in a diabetic patient, or some kind of artificial lung graft might help produce proteins that could alleviate breathing disorders. Virtually all kinds of possibilities would follow from such a capability.
These theses, however, assume a kind of biological determinism: that, in theory, predetermined states of a biological system can determine a final state. That would be the principle behind this sort of engineering, in the same way that an actual machine is pre-designed, blueprint fashion, in a computer via CAD and/or software engineering.
And on this point I believe it is highly experimental, and I’m not quite convinced of what it might entail. Is it robotic, or is it just an alternative form of life? Those questions definitely relate to biological, and also possibly genetic, determinism. Kamm highlights this issue: “…The process of building a living machine, however, necessarily deviates from current top-down manufacturing procedures in ways that are both limiting and enabling.” The impetus for artificial cell machines is probably due to a large extent to recent advances in stem cells. Stem cells, or pluripotent cells (PCs) to be technical, are actually thought of for this very purpose: as cells which are not pre-destined to become skin, liver, or brain, etc., but which, in theory, can become what the medical practitioner triggers them to be. Kamm would take it a large step further, even from where tissue engineering is attempting to go now, by looking at integrating various cells to work in a collective whole, which he believes is much more powerful (the analogue he uses is to “organoids,” clusters of PCs which show collective behavior).
Clearly, living machines are currently still at a very basic level and far from applications, but they raise intriguing questions. Kamm’s idea of, for example, “co-locating” various multicellular components in order to assemble working structures is not vastly unlike what Venter achieved with a single cell: the co-location of single cellular components, a core of genetic material (a genome) transplanted into a receptive, empty cell membrane. In theory it appears possible; the issue is how these might be “enabling but also limiting,” and this issue of being enabled but at the same time potentially inherently limited is directly related to potential implications of biological determinism.
Can a biological ‘machine’ be made and if so, does this require that key functions in it are pre-determined? It would seem to be a new paradigm for how an organism might grow.
Here is an excerpt from the Research.gov site for the National Science Foundation’s Center for Emergent Behaviors of Integrated Cellular Systems, of which Dr. Kamm is program director (found here): https://www.research.gov/research-portal/appmanager/base/desktop?_nfpb=true&_windowLabel=assetSummary_1&_urlType=action&wlpassetSummary_1_id=%2FresearchGov%2FResearchAsset%2FPublicAffairs%2FCenterforEmergentBehaviorsofIntegratedCellularSystems.html&wlpassetSummary_1_action=selectAssetDetail
 “The Center for Emergent Behaviors of Integrated Cellular Systems (EBICS), based at MIT, seeks to understand cells and their environment, and how these cells work together to incorporate biochemical and mechanical cues to perform a wide variety of functions. The center’s approach for constructing biological machines is similar to the engineering techniques employed in making non-biological machines. Many Center staff members are engineers, and they think like engineers, building machines up from individual parts.
The research could have dramatic applications in industry, medicine, energy and the environment, among others. Biological robots in an assembly line, for example, could repair themselves and adapt to optimize their performance; new “organs” could be designed and implanted, with the ability to sense drug or glucose levels in the bloodstream, and respond appropriately by turning on or off drug secretion; organisms could swim to an oil spill, and consume the damaging substance, replicate if needed, then swim home to the host ship for processing; smart plant-based machines could release the correct amount of controlled energy to produce heat, light or mechanical work.
Creating living systems with important new roles raises ethical issues that center scientists work to address. “Will these machines be endowed with the capability to self-repair, adapt, and self-replicate?” says Center director Roger Kamm. “If so, they become indistinguishable from natural organisms and need to be considered in a similar light. If stem cells are used, from what source may they be taken? What protections and regulations need to be in place?”
What strikes me is how these “machines” would interact with other biological organisms. If we are to imagine swimming biological robots that clean up oil spills and then return to the ship, for example, how would they do against a natural open-ocean super-predator, like a whale, or against smaller fish? How would these machines fight off bacterial attack in open water? Bacteria attack virtually any substance known. I came across a proposal (in a grant) recently in which bacteria are being fed paint stripper, that is, dichloromethane, as food. And bacteria, it turns out, feed off crude oil, and have probably been doing so for as long as it has been oozing up from the Gulf of Mexico. So it seems that many of the problems facing the “biological machines” described by this group would, in theory, be virtually identical to those faced by natural organisms.
Should we be concerned about potential ethical issues? When you think about it, our world is chock-full of non-natural functioning living systems and environments. For example, we don’t have the very natural plagues that would be expected in a non-vaccinated society, or rampant illnesses spreading because we lack molecularly engineered antibiotics or antiviral vaccines. We live in a biologically engineered society, one inherited and built, to a large extent, from the ancient practice of designing the environment we live in, from the selective breeding of livestock and food crops to produce higher yields to the physical structures and systems that surround us.
Deliberate cross-breeding of plants was likely done tens of thousands of years ago (I would guess at the outset of agriculture), though the original engineers obviously lacked the sophisticated tools of today. Yes, there are issues with GMOs, particularly with respect to free markets, and the potential risks of proteins made from engineered crops or other products should be evaluated. People should have the right to access natural, organic sources. However, these aims are not incongruent with better technologies that might lead to new and unexpected breakthroughs in medicine. These are radical ideas directed toward unmet and unsolved medical challenges (artificial organs, cancer), so perhaps out-of-the-box thinking is not only justified but required. I look at most of our more advanced health advancements and see that they are the direct result of more knowledge about science, not less. It should be noted that the organ mimics referred to by Kamm might in fact replace animal testing altogether. So I believe we will deal with the ethical limitations of the engineering as the advancements come about and their potential benefits or risks can be evaluated, not before.




Author notes:
[Material from "The Crisis Equation" blog] MKK