December 12, 2007

Cities and Regions as Self-Organizing Systems: Models of Complexity by Peter M. Allen

(Sorry this is late, but I came down with a case of Bell’s Palsy last week, which is a non-contagious, temporary viral condition that affects the facial nerve on one side of the face, paralyzing it. So basically, I can’t really blink my left eye, nor can I manage more than a half-smile, and I’m talking so far out of the side of my mouth I could almost stand in for Vice President Cheney. Anyway… see you in class!)

For my second chaos and complexity book, I looked at "Cities and Regions as Self-Organizing Systems: Models of Complexity" by Peter M. Allen, a British ex-physicist who was strongly influenced by Ilya Prigogine before taking up spatial economic modeling. The book was published in 1997, about 10 years ago, but remains one of the main works available to most academics that explicitly develops urban complexity models and theory. (In some ways, the work is similar to, and contemporaneous with, now-NYT op-ed columnist Paul Krugman’s short 1996 lecture, "The Self-Organizing Economy", which takes a simpler look at how feedback models create spatial patterns across inter-urban, regional and national scales.)

As a general goal, Allen is interested in creating a dialogue with the urban planning community, particularly in reframing how planners approach their jobs. At one point he vents some slight frustration with current planning practices, which "make the assumption of spatial equilibrium in modeling the 'changing' spatial pattern" (43). Unfortunately, knowing as little as I do about economic modeling, much of the mathematical content of the book went over my head. There’s a way in which applying complexity theory to the social sciences can only be ‘performed,’ rather than explained, and the best approach to understanding the kinds of process-based approaches Allen utilizes would be to recreate the models myself, tinkering with and re-creating the variability. It made me think that the clearest jumping-off point from Allen’s work for our class might be the excellent network illustrations that Jeremy demonstrated a few weeks back during his presentation on LinkedIn. In particular, both modelers are attempting to show how network theory can apply to everyday life, creating patterns out of a blank slate, particularly focusing on how feedback loops persist over time.


Continue reading "Cities and Regions as Self-Organizing Systems: Models of Complexity by Peter M. Allen" »

December 11, 2007

Ishmael: The Ideas of Daniel Quinn

For my final text, I decided to explore the ideas of Daniel Quinn, whose work I had read previously, but found myself thinking about in new ways and with newfound urgency following our readings on chaos & complexity this semester. Quinn's most famous work is Ishmael, but he has authored a set of books -- Ishmael, The Story of B, My Ishmael, and Beyond Civilization -- which all attempt to clarify the same core ideas. I found Quinn's ideas became most clear after reading all four books and exploring the plentiful additional material on the Ishmael Community website, which includes essays, presentations, and direct answers to questions and challenges.

Ishmael (as well as The Story of B and My Ishmael) is written in the format of a novel. In the beginning, a first-person narrator meets a telepathic gorilla (I know), and most of the book consists of the gorilla leading the narrator (and thus, the reader) through a series of discussions about how humankind got to where it is today. The narrator takes the position of the naive reader, asking multiple questions (sometimes ad nauseam) and in the process making visible our cultural myths, unveiling how and why humanity is no longer living in accord with the rest of the world, uncovering the origins of society's problems, and showing how we are headed toward cultural collapse if we don't change.

Core Ideas
I believe the core of Daniel Quinn's many ideas can be synthesized as:
1) Population growth is directly related to food production. All living populations -- including humans -- will grow to match their food supply.
2) As long as we produce a surplus of food (on a global scale; not regionally), the human population will continue to swell -- regardless of birth rates, death rates, standard of living, education, etc. (Click here for more detail.)
3) We perpetually produce a surplus of food because we practice Totalitarian Agriculture, which eliminates competing species, destroys biodiversity (some estimates say over 200 species a day are becoming extinct), creates massive waste and pollution, and spreads to disrupt entire ecosystems in order to produce as much food as possible. Ultimately, the increased food fuels rapid population growth, which demands yet more farming -- a feedback loop.
4) The creation of an agricultural system that produces vast surpluses is what has fueled the massive rise and spread of our culture (dubbed the "Takers"), and the cultural myths or stories that accompany it: humans are the ultimate pinnacle of the evolution of life on earth, humans exist differently and separately from the rest of nature, humans should exploit the web of life however necessary to further this "natural" dominance, etc.
5) The creation of this agricultural system and the production of surpluses is what first created systems of class -- there was now something to lock away, to hoard and own, and social strata (of this type) emerged. From there, Quinn lays out how all of our civilization's problems evolved from class, overpopulation and imperial cultural myths -- poverty, sexism, racism, crime, depression, etc. He also makes a clear case that this method of agriculture, and all the systems it has spawned, is the cause of Global Warming.
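The feedback loop in point 3 is easy to see in a toy simulation. The sketch below is purely illustrative (the growth and surplus factors are my own assumptions, not Quinn's figures): population drifts toward the carrying capacity set by the food supply, agriculture then produces a surplus above the new population, and both climb without bound.

```python
# Toy model of the food-population feedback loop (illustrative numbers only).
food = 110.0        # current food supply, in population-units it can support
population = 100.0

for generation in range(5):
    # population grows toward the carrying capacity set by the food supply
    population += 0.5 * (food - population)
    # agriculture responds by producing a surplus above the new population
    food = population * 1.10
    print(f"gen {generation}: population={population:.0f}, food={food:.0f}")
```

Each pass through the loop the surplus fuels growth, and the grown population calls forth a larger surplus; neither quantity ever settles.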

The Great Forgetting and Cultural Collapse
Quinn claims (in a variety of ways over all four books) that for 3 million years, humans lived a very different sort of lifestyle: a tribal lifestyle governed by an unwritten "Law of Limited Competition," whereby humans hunted, farmed (in other ways), and competed to the fullest of their capabilities, but didn't obliterate competitors, species, ecosystems or food supplies to do so. Quinn claims that every member of the tribe had a specialized function and was valuable, and for the most part people gathered and worked for what they needed from day to day (rather than collecting surpluses or additional wealth) -- a process that took a few hours and left the rest of the day open to other pursuits, as opposed to the 40-hours-a-week-for-40-years lifestyle that we burn ourselves into the ground with today. Quinn holds that this lifestyle worked just fine for humans and was naturally selected over millennia, and he doesn't find these basic tenets to be "primitive" in the sense of cultural evolution the way, say, Robert Wright does in Nonzero.

Quinn says that about 10,000 years ago, all that changed with the emergence of Totalitarian Agriculture, which produced surpluses, exploded the population, and fueled the spread of this practice and the classist cultural ideologies that emerged with it. He says we can trace the exponential human population surge back to this point, and backs this up with a variety of data from different disciplines, gathered by the United Nations, the United States, etc. -- all of which point to a major change occurring around 10,000 years ago (most charts actually begin measurement at that time, because there begins to be a large enough change to measure), but without most analysts questioning what occurred then. Quinn calls this The Great Forgetting -- human history omitting the lifestyle that worked well for 3 million years, because only the last 10,000 years have been well-documented, by people already immersed in Taker culture.

Quinn says that the increasingly complex global problems we face today are the signs and symbols of a failed cultural experiment -- humans tried this Taker lifestyle of living out of accord with the rest of the living community, and it took about 10,000 years for the experiment to collapse. As an analogy, Quinn presents the idea of someone trying to build an airplane whose craft is not in accord with the laws of aerodynamics. The person drives the craft off the edge of a cliff and for some time is in free fall. During this time the person yells "Look, I am flying! Gravity does not apply to me!" -- but soon will discover that gravity does apply, and in a most drastic manner. We are headed for a crash.

The Food Race and Overpopulation
Quinn states that if there is still time to avoid a crash, doing so will necessitate ending our current agricultural system and the race to produce more food globally. He attempts to show, in a variety of ways, that the world currently produces far more than enough food for all humans; but because our population continues to skyrocket and there are local famines and food shortages, we operate under a cultural myth that we need to push and push to create more food -- which, he states time and time again, will only fuel overpopulation in a never-ending cycle.

This line of thinking uncovers one of Quinn's most controversial claims, which is that we should not send food to starving populations in "Third World" countries; they have already outpaced the resources in their environment, and sending them food will only increase their population, causing more suffering. He says this is like pouring gasoline on a fire just because it is a liquid and we feel we must do something in the face of tragedy.

Click here for a fairly accessible (if plodding) slideshow presentation with data about some of these ideas, titled World Food & Human Population Growth. The slideshow includes quotes and findings from Jared Diamond in Guns, Germs and Steel.

New Tribalism
Ultimately, Quinn advocates for abandoning our current system of agriculture, "walking away" from our owner/conqueror cultural myths, and finding our way back to a manner of living with the rest of the world that biological and cultural evolution selected for 3 million years -- a tribal lifestyle. He stresses that this doesn't mean giving up all technology, picking up clubs or living in caves. If we are to pull away from Taker culture, our new tribal lifestyles will be something completely original, a brand-new idea that hasn't existed before. Quinn rails against civilizations and for smaller, self-sustaining tribes -- classless and cooperative communities -- that create their own order based on what works best for them within the context of their environment, saying there is no one right way to live, which I see as a nod to the flexibility called for by complexity theory. Far from being primitive, Quinn says new tribalism is about living in accord with the rest of the living community, "an escape route for the billions... who slog stones up the pyramids not because they love stones or pyramids but because they have no other way to put food on the table."

One part of Quinn's argument that I wholeheartedly agree with is that all our tinkering with current systems will mean nothing if we don't find a way to address overpopulation. The Earth's population doubled from 1900-1960, and again from 1960-2000 -- even though the "population growth rate" is currently declining. (Click here for more detail.) Within the span of most of our lives, the number of humans on this planet has doubled. And doubling means billions of people. What will emerge and what will collapse within this infinitely complex adapative system?


Another Take: At Home in the Universe, Stuart Kauffman

Given the length of postings and the previous presentation of this work in class I will endeavor to keep this posting to a minimum.

I admit that I was left somewhat unsettled by the class presentation of Kauffman’s At Home in the Universe. The book was already held in such high regard by so many members of the class that I was surprised to hear (what I felt were) glaring inconsistencies with the rest of the literature encountered so far in the course. I was also intrigued by the tone of some of the quotes from the book, which bordered on the spiritual—even religious. It left me wanting clarification. A few days later I found myself in a local bookstore browsing their pint-sized science section and was surprised to find this book staring at me from the shelves. Noting the serendipity, I picked it up.

I found in my reading that the presenter cannot be faulted for my initial perception. In fact, he did a good job of reporting what is there. The fact is that the book almost begs misquotation. Kauffman’s insistence on mining old ideologies and long-debunked philosophies to find the language with which to tackle his subject results in a text that yields pages of quotable blurbs which, if quoted selectively enough, read like an endorsement of creationism. It is one of those annoying texts that provides equal ammunition to both sides of an argument. Stepping back from this assessment, I concede that Kauffman is onto something quite profound here.

The issue at stake is the origins of life in the universe. Kauffman has set out to shrink the universe, or at least the universe of possibilities that has given rise to life, in order to show that we are not as unique as we may have previously thought. In fact we, and all life in the universe, exist as the end result of the iteration of physical systems, laws and mathematical truths that have existed since the beginning of time. In short, life is an assured inevitability. To paraphrase Kauffman, life more or less condensed out of the handful of attractor states that simple physical systems are drawn to. The existence of these attractors within the topology of possible states (state space) of a given physical system ensures that the super-vast majority of possible states will drain into just a handful of attractors.

For example, in his now-familiar light bulb experiment with Boolean networks, whose nodes each take 2 inputs to determine their next iterative state, Kauffman showed that a network with 100,000 bulbs, and thus a state space of 2^100,000 (roughly 10^30,000) configurations, would settle down and cycle through a tiny, tiny state cycle with a mere 317 states on it. This means that the calculated probabilities against the formation of life are largely irrelevant, since they tell us absolutely nothing about the behavior of the physical system in question. As shown above, given a simple system governed by simple rules, we can begin at any state within the effectively infinite state space and will soon find ourselves in one of just 317 configurations. We can rest assured that the vast odds against life emerging anywhere in the universe condense in much the same way, since the basic reaction networks underlying all life are only slightly more complex than Kauffman’s light bulb model.
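Kauffman's result is easy to reproduce in miniature. The sketch below is not his actual simulation; it is a minimal random Boolean network with two inputs per node (the network size and random seed are my own choices), iterated from a random start until a state repeats, which exposes the small attractor cycle hiding inside a large state space.

```python
import random

random.seed(42)
N = 24  # a small network for demonstration; Kauffman's example uses 100,000

# Each node gets two random input nodes and a random Boolean function,
# stored as a lookup table over the 4 possible input combinations.
inputs = [(random.randrange(N), random.randrange(N)) for _ in range(N)]
tables = [[random.randrange(2) for _ in range(4)] for _ in range(N)]

def step(state):
    """Advance every node synchronously by one tick."""
    return tuple(tables[i][2 * state[a] + state[b]]
                 for i, (a, b) in enumerate(inputs))

# Iterate until a state recurs, then measure the length of the cycle.
state = tuple(random.randrange(2) for _ in range(N))
seen = {}
t = 0
while state not in seen:
    seen[state] = t
    state = step(state)
    t += 1
cycle_len = t - seen[state]
print(f"attractor cycle of length {cycle_len} "
      f"inside a state space of 2^{N} = {2**N} states")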

Here, I concede somewhat the appeals to the divine that Kauffman keeps feeding us throughout the text. If we are indeed the result of basic physics playing itself out over billions of years, iterating through simple rule sets, then the exact nature of the “rules” that govern the universe as a system becomes the occupation of a necessary deity. However, if the same principles of self-organization are at work on the system of basic forces as are theorized for our living systems, then we may just as well be living in a godless universe. Perhaps Kauffman can be forgiven his dizzying endorsement of seemingly conflicting ideologies earlier in the book.

It is this collapsing nature of possible states that I found the most profound revelation of this book. After all, as a composer, I deal with these attractor states on a daily basis. If we sit down at a piano and play a random selection of 3 keys to produce a chord, we are drawing from 109,736 (88·87·86 / 6) possible three-note sets. If we are to place bets on the likelihood that the note collection will be, say, a G-major chord, we may be surprised to find that the probability is roughly one in 280. In fact, about one of every 24 randomly drawn three-note sets will prove a major chord in some key—probably not good enough odds to sit in with your local jazz trio, but still remarkably high. This sort of distillation of probability is only possible through an understanding of the mathematical framework underlying the musical system.
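The arithmetic can be checked by brute force. This sketch enumerates every unordered three-note set on an 88-key piano (the only assumption is the standard A0-to-C8 keyboard layout) and counts how many form major triads:

```python
from itertools import combinations

# Pitch classes 0-11 (0 = C). The 88 piano keys run from A0 (pc 9) up to C8.
keys = [(9 + i) % 12 for i in range(88)]

# A major triad on root r uses pitch classes {r, r+4, r+7} (mod 12).
major_triads = [frozenset({r, (r + 4) % 12, (r + 7) % 12}) for r in range(12)]
g_major = frozenset({7, 11, 2})  # G, B, D

total = major = g = 0
for combo in combinations(keys, 3):
    total += 1
    pcs = frozenset(combo)  # pitch classes sounded by these three keys
    if pcs in major_triads:
        major += 1
        if pcs == g_major:
            g += 1

print(total)          # 109,736 unordered three-note sets
print(total / major)  # about 23.3 -- roughly one in 24 sets is a major chord
print(total / g)      # about 280 -- the odds against G major specifically
```

The count lands at roughly one major chord in every 24 sets, and about one in 280 for G major specifically.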

I think Kauffman is advocating for this kind of approach to the question of the origins of life, as well as in related fields. We must first understand the behavior of the underlying system and its attractor states before we can deem outcomes impossibly unlikely. More and more often we will find that complex systems tend to inhabit small regions of their possible state space. Perhaps it is the interactions that happen within this relatively minuscule portion of the total state space that will prove most consequential. ~~J

Six Degrees: The Science of a Connected Age

I’m one of those inept bloggers, unable to post properly for this class. I’m including an early comment I had on the presentation that Jerome gave. To me, posting it now helps bring all my readings and the class presentations on CC Theory into a clean perspective.

This comment is about Jerome’s very good explanation of his extremely complex book. During class I asked how the author handles people using their common sense or intuition, given that the author’s theories completely discount the value of common sense and intuition. This struck me as an example of Dr. Shupe’s statement that some ideas come out of the mouth and circle around to hit you in the back of the head. To rephrase my original question: Casti uses his theory to discuss social systems like economies (the beer distribution game) and collapsing governments. As social systems, they must include people who act, often using their common sense and intuition. Yet the overall theory discounts this activity. Does Casti’s theory take these types of actors into account, or does he completely ignore them? Either way, I think the concept comes back and hits him in the back of the head.

The second book I read for class is Duncan J. Watts’ Six Degrees: The Science of a Connected Age. The title comes from the iconic idea that everyone on earth is connected by six degrees of separation. Indeed, Watts thinks it is possible that today humans are connected by even fewer than six links. Watts is a sociology professor. Six Degrees examines the study of networks found in the real world of people: friendships, rumors, fads, diseases, firms and finances. In the book Watts uses terms like percolation theory instead of tipping point, and flexible specialization to describe interdisciplinary work. He effectively applies Barabási’s power laws to social networks. To Watts, understanding networks is vital to understanding this current generation of connectedness. The main theme of the book is that some problems can only be solved collectively -- that one individual, or even a single discipline, is not enough to resolve some issues. The book is a strong proponent of multidisciplinary work, and fits in well with the MLS program. Watts’ goal is to change the way people look at the world. This is what CC Theory has done for my perspective on the world.

Watts asks a great question in the book that sticks with me. Instead of asking, “How small is the world?” he asks, “What does it take to make the world small?” He uses some great analogies, like the fact that the chirping of crickets becomes synchronized without a conductor present to guide them. This particular example harkens back to the earlier book I read about how guppies can also make synchronized motions, learn and react to each other. Again, complex organization is found in surprising places. Watts believes that network connections work due to the clustering (overlapping) of connections as well as the average path length of the links. Hubs are not needed for small-world networks to succeed; overlapping is enough. This clustering and deep connection makes the network a “small world.”
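The clustering-plus-shortcuts idea can be sketched with a toy Watts-Strogatz-style model (the network size, rewiring probability, and seed below are arbitrary choices of mine, not figures from the book): start from a ring lattice, rewire a few edges at random, and watch the average path length collapse.

```python
import random
from collections import deque

random.seed(1)

def ring_lattice(n, k):
    """Ring of n nodes, each tied to its k nearest neighbours on either side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p):
    """With probability p, move an edge endpoint to a random node (a shortcut)."""
    n = len(adj)
    for i in range(n):
        for j in list(adj[i]):
            if j > i and random.random() < p:
                new = random.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over all reachable pairs, via BFS."""
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

apl_lattice = avg_path_length(ring_lattice(200, 3))
apl_small = avg_path_length(rewire(ring_lattice(200, 3), 0.05))
print(f"pure lattice: {apl_lattice:.1f}   with a few shortcuts: {apl_small:.1f}")
```

Even a handful of rewired edges shortens the average path dramatically while most of the local clustering survives, which is the heart of the small-world effect.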

Watts believes that functioning as a small world is the best way to operate in the current world. This period of growing complexity and ambiguity calls for collaboration strategies that cross traditional boundaries. Individuals and teams that used to work in isolation need to be connected, sharing information and crossing skills and knowledge. He also believes that to be successful, a network has to be robust yet contain some weaknesses; otherwise the network is too vulnerable to catastrophe. I think this statement rings true with the lattice graphs we’ve seen in class being particularly strong.

Another thing Watts says also struck me about the whole idea of CC Theory. He states there is no generic “small world” model that will work everywhere in every situation. The way to solve problems, he posits, must be modified and tailored to each organization, each system, each person. Just like CC Theory is not THE answer to the workings of the world (or is it?), his “small world” theory is not a solid fit everywhere either.

Watts believes that everyone in the world is part of the same family, involved in one enormous and complex network system. His “small world” notion really resonated with me, and I could apply it to current topics in the news. For example: Science News magazine recently reported that geneticists have determined that North, Central and South America were all populated by people who crossed over the Bering Strait from Russia. They traveled down the coast all the way from Alaska to Chile and then spread inland from there. These are the Native Americans, Bolivians, and Ticos of the New World. Then the Europeans came across the Atlantic Ocean and the populations re-connected. We are all from the same family.

Another item I applied the “small world” theory to is the new report on Iran’s nuclear capabilities. The previous report was drafted using traditional investigation from limited intelligence-community sources. The new report was produced with a new lead investigator applying many of the interdisciplinary techniques that Watts advocates. Small groups were used, dissent was encouraged and those questions were then also studied, and information was gathered from traditional and non-traditional sources, across governmental agencies. The final report was more factual than the first, with less supposition. The group looked at old information in a brand new way. To me this represents everything that CC Theory hopes to explain.



I just found this other quote from Bonnie that might be helpful and interesting.

"Visualization is the process by which the brain imagines aspects of the body and informs the body that it exists. In this process, there is a director or guide...

Somatization is the process by which the kinesthetic (movement) and tactile (touch) sensory systems inform the body that it exists. In this process there is a witness--an inner awareness of the process...

Embodiment is the awareness of the cells of themselves. It is a direct experience. There are no intermediary steps or translations. There is no guide. There is no witness. There is the fully known consciousness of the experienced moment initiated from the cells themselves. In this instance, the brain is the last to know. There is complete knowing. There is peaceful comprehension. Out of this embodiment process emerges feeling, thinking, witnessing, understanding. The source of this process is love."

I am not going to try to explain anymore, but I thought some of you might find this interesting.

Movement and the Emergence of Mind


One of my main questions this semester was how “mind” emerges in relation to the organization of the body. How is movement an expression of this emergent process?

I have read several texts this semester as I have been drafting a process paper for my thesis, which is a creative project on cellular memory. By cellular memory, I mean the ways that we both experience and remember the patterns of the past through our bodies. In genetics, cellular memory is understood through the way that DNA carries patterns of response and the physical characteristics of our ancestors. My definition includes this and extends beyond it in looking for the experiential basis of this phenomenon. The overarching text that has emerged this semester is a seven-minute sample of dance work that I have edited from seven years of research and performance on cellular memory.

One of the main modalities that I have worked with in my movement and art practice is called Body-Mind Centering. There is only one text written by the creator, which I will draw from a bit while also bringing in information from my experiential studies of the modality. The material is primarily a process transmitted by engaging with the lived experience of the body.

Body-Mind Centering is an ongoing, experiential journey into the alive and changing territory of the body. The explorer is the mind - our thoughts, feelings, energy, soul, and spirit. Through this journey we are led to an understanding of how the mind is expressed through the body in movement. There is something in nature that forms patterns. We, as part of nature, also form patterns. …our body moves as our mind moves…..this balancing is based on dialogue, and the dialogue is based on experience. (Cohen 1)

Body-Mind Centering uses anatomical and physiological maps as doorways to exploring the lived experience of the body. It is a process of exploring experientially, through perception and sensation, the different patterns, forms and qualities of movement that arise in the body. The process explores how human movement develops as a spiraling process. For instance, a baby learns to press into the floor, lift its head and see its own hand before it learns to reach. Also, in exploring the movement of the breath, I could trace a micro-macro continuum from the cell to the expanding and condensing of the whole body.

One of the central hypotheses of this work is that consciousness is not limited to the brain, but is, at base, a cellular phenomenon. If consciousness is understood as a process of self-organization through movement, then all cells, systems and organs of the body are implicated in this process. Consciousness, in this sense, is rooted in the direct experience of relationship to the world as we are able to experience and relate to it through the integration of our movement and mental processes. Experiencing requires openness and a willingness to experience change.

All life is movement. Movement can be conceived as an expression of connection, process and change. We can feel this in our very movements of breathing, of the heart beating. Movement yields an experience of change, and it is one medium through which we can respond to change. Even our walking is a pattern of continually falling and catching ourselves. It is the experience of life as it changes. It is through the experience of our living bodies that we encounter one another, eat, love, engage with life. It is through the experience of the passing of time and the living and dying of the body that I feel both the impermanence of our existence and the glimmers of interconnectedness that reveal the ways that we truly live in and with one another.

In this sense, movement has emerged for me as a way of experiencing my history as a process of change; I am inhaling because I have exhaled, walking because I have worked my way to standing by working through layers of movement patterns that make this possible. As the fulfillment of one movement becomes the source of new possibilities for the next, the experience of movement expresses for me the lived experience of continuity and change, for the iteration and evolution of patterns of engagement with my environment.

To move requires that one perceive the experience of the moving body internally and in relation to the environment. The dialogue between inner and outer forms the basis for spontaneous response and organization. We move because we are aware of, and responding to, ourselves and others and the environment. Movement can be perceived through proprioceptors and interoceptors in the body that track inner and outer movement. Smaller movements are also perceived through vibrational sensors in the skin. But at base, all movement is vibration. To perceive movement is to perceive vibration.

My explorations have centered on the connections between mind state, vibration, and movement.

From Bonnie’s writing on cellular consciousness.

“When we have an experience, our perception of that experience is an extension of experiences of the past which direct our focus and expectations; the experience dissolves at the moment of creation into memory; its energy form is projected into future experiences; and we can communicate it through symbols, imagery and metaphors which are then interpreted by others based upon their perceptions of their own experiences…another means of communicating experience is through vibration - by cellular resonance. One’s inner cellular experiences and expressions are received directly by sympathetic vibrations of the corresponding cells of others…less obvious in our culture is vibrational resonance above and below the level of frequency registered as sound by the human ear. Cellular resonance occurs outside this range of auditory perceptions, in the realm of silence. The degree of communication is influenced by the similarities between the cellular vibration of the respective people and the range of vibrational resonance of each of the people involved…Each of us listens and responds within the same range of vibration that we experience and express. In order to perceive vibration, the one listening must have access in oneself to the same rate of vibration as the person expressing. In order to communicate, one must be able to vibrate within the range of resonance of the listener…the potential of our range of resonance within this “silent” cellular communication is vast. It is this dimension of vibration that underlies and provides the background for all other forms of communication.”
(Cellular Communication, The School for Body-Mind Centering)

I have explored my initial question this semester by sorting through four years of research into my own experience of cellular resonance, communication and memory. For the past seven years, I have been exploring what it means to remember experiences that I had as a very young child, before I had the level of brain development to categorize and understand my experience in ways that are conceptually meaningful. I have used my awareness of the language of movement, touch, vibration and sound. Through my process, I have used these mediums to explore and make meaning of various experiences. I am exploring how the movement patterns that have emerged through these explorations are examples of forms of “mind emerging through movement.”

To do so, I took a particular experience of questioning the nature of my relationship to my parents who I lost as a young child. I took four years of research of me interacting with environments and objects that had shaped their lives. I worked with my body as a source of memory of their bodies. I documented my experiences and the movements that emerged through these explorations. My main question was, how do I experience my connection to these people who were my initial conditions?

This semester I cut and pieced segments together and looked for the patterns, and iterations on patterns, emerging from these explorations. What emerged were patterns of reaching and grasping, condensing and expanding, that resonate with the larger metaphoric meanings of taking in and giving out. What emerged as I spliced different pieces of work together to make a new layer of complexity were the suggested meanings in the different movements. The meaning of my experience of connection to them emerged as intimations of the similarity and difference between self and other. For instance, in merging my face with theirs through slide projection, one can see an image that expresses how similar genetically we really are; I can merge our faces to look as one, yet we are two. In creating expanding movements and lining my body up with the slide of a mountain that my father climbed, I attempt to express the resonance between inner and outer form as his form echoes in my own.

This process has represented for me an experience of the emergence of mind through iterating movement into greater forms of complexity over time. Through this process, memory transformed into metaphor, and universal meaning arose from the particular experience. What started out as me questioning how I could experience my relationship to my parents evolved into the very movements through which I can experience relationship to anything: openness to the experience of continuity and change, and sensitivity to initial conditions. Cycling between movement improvisation and free writing, I sculpt different layers of perceived complexity and follow the continuum between perception, sensation, personal meaning, and metaphor. The process itself is an experiment in complexity, a continual dialogue between movement and searching for the meaning that is emerging in the movement. Integrating parts to reveal a greater whole, the process looks at how meaning emerges through the properties of complexity.

My understanding of “mind” that is emerging has less to do with conceptual landscapes and more to do with the possibility of movement as an expression of the form and process of knowing. If meaning is continually emerging and iterating into greater forms of complexity, the form of meaning could be conceived of as a moving process that is consciousness itself.

I am sorry that I didn’t get an opportunity to show my video sample in class. I will be doing a final showing of work sometime in early January in the dance building. I will let you all know the dates and times. I hope you can make it.

Cohen, Bonnie Bainbridge. Sensing, Feeling, and Action. Northampton: Contact
Editions, 1993.

Strange Attractors: Chaos, Complexity, and the Art of Family Therapy

The book I will introduce to the class is Strange Attractors: Chaos, Complexity, and the Art of Family Therapy by Michael R. Butz, Linda L. Chamberlain, and William G. McCown.

This book looks at the complexity of the family system and applies Chaos Theory to help therapists understand the dynamics of interpersonal bonds. Since families are open systems in constant flux, applying empirical data to such unpredictable situations has often failed to produce positive therapeutic results. The book argues that a nonlinear theoretical approach to family therapy is better suited to the family paradigm.

Family Therapy is a fairly new field in psychological treatment, developed between the 1940s and 1960s. The book introduces five paradigm shifts that have shaped family therapy: the double-bind, Cybernetic Theory, open systems with transformative states, autopoiesis, and most recently self-organization and Chaos Theory. The last of these is the focus of the book.

Families...Complex Terrain
Here the book introduces strange attractors in patterns of family interactions. The pull of the strange attractors within a family system arises from the conflict between solitude (a focus on oneself) and intimacy (a focus on the other in the relationship). The fluctuation between love and fear makes these elements the definitive strange attractor patterns in relationships for family therapists.

Instead of the normal phase space where attractors lie, the book introduces “phrase space,” the pattern of communication that establishes both problems and solutions in families. By mapping phrase space, a therapist is able to recognize what is and is not being said in a relationship. It is a way for therapists to describe the family’s boundaries in terms of information. If the family system is unstable, the therapist may be able to provide the particular information the family needs in order to self-organize. Introducing a therapist into a family system is a step towards greater complexity and a way to establish a new strange attractor that can positively restore order in a chaotic family system.

Catching the Butterfly–Chaos in Therapy
The butterfly effect is used in family therapy to introduce a new piece of information that will stir the air where it is most stagnant in the family. Therapeutic observations of families are guided by careful consideration of where the energy is directed in the system. Through observation, the therapist should determine the types of patterns at work and try to introduce small perturbations to direct the family away from the attractors that are volatile in the relationship.
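The "butterfly effect" the authors borrow is the familiar sensitive dependence on initial conditions. A minimal numerical sketch (my own illustration, not from the book) shows the idea with the chaotic logistic map: two starting states that differ by one part in a million end up completely decorrelated.

```python
# Sensitive dependence on initial conditions: the logistic map
# x -> r*x*(1-x) at r = 4.0 is fully chaotic, so two almost-identical
# starting values diverge after a few dozen iterations.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # perturbed by one part in a million

# The gap between the two trajectories starts microscopic and grows
# until it is as large as the state space itself.
for step in (0, 10, 30, 50):
    print(step, abs(a[step] - b[step]))
```

The therapeutic analogy is only loose, of course: the point is that in a chaotic system a tiny, well-placed perturbation can have outsized long-run effects.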

Fractals and Forks in the Road
Therapists use fractal models to help derive new treatment strategies for undifferentiated families.
Families described as rigid lack flexibility, while chaotic families are unstable because they are too open to change. Unless the family system experiences stress, the fractal generated by such families will be monotonous (sameness), with no bifurcation pointing towards change; equilibrium and stagnation then set in. This inhibits the family from progressing in a positive manner and produces psychological disorders.

At the Turning Point
Here therapists determine whether they should use stabilization or destabilization treatment.
I found it interesting that, as an intervention for families in crisis, therapists would introduce more chaos into the already chaotic family system in order to help stabilize it. The family self-organizes to deal with the chaos introduced and emerges as a more stabilized system. Introducing new strange attractors also helps break negative patterns and enables the family system to bifurcate towards a positive, stabilized state.

Some therapy methods of destabilizing
• Joining with under-represented family members
• Giving families tasks that they will fail in completing
• Empowering family members
• Prescribing specific behaviors to upset balance
• Forbidding specific behaviors
• Isolating scapegoats

Destabilization of the family system becomes unethical if there is a reasonable probability that the family will cease treatment, because the initial effects of these interventions cannot be predicted. Therapists have to be careful when introducing chaos or crises to a family, given the sensitivity to initial conditions that they observe. The therapist as an observer must acknowledge their impact on the family (echoing Heisenberg’s Uncertainty Principle): merely by being present, whether intervening or quietly observing, the therapist makes the family system more complex and more prone to influence.

The book tries to convey chaos theory in a way that will move therapists towards more theoretical techniques instead of empirical application when assessing families, because of the levels of uncertainty involved. It has many case-study examples of chaos theory techniques that therapists are applying to the open, organic system of the family. It is written in a way that is easily understood after reading and discussing Chaos Theory in this class.

December 10, 2007

Ghost Hunters: William James and the Search for Scientific Proof of Life After Death

Ghost Hunters: William James and the Search for Scientific Proof of Life After Death
Deborah Blum, 2006, The Penguin Press

Deborah Blum, a Pulitzer Prize winner, is a professor of science journalism at the University of Wisconsin and has written about scientific research for the New York Times, Discover, and many other publications. Her book is concerned with several prominent scientists, including the renowned William James, and their commitment to investigating life after death. Blum explores the courageous actions of these scientists in their response both to Darwinist critics and to the organized religious dogma facing them; they questioned whether the picture of the world being carved out near the end of the 19th century, by both Darwinist and religious doctrine, was too simple to explain the complexity of our reality and how we define life after death. These scientists became known as psychical researchers, and they collaborated over a span of 30 years, intrigued by a greater complexity that many were unable to define in the scientific thinking of their time. We still play with the idea, yet have no “concrete” proof for such conjectures today. This book explores not only the possibilities of a greater complexity we have yet to prove or understand, but also offers an interesting historical telling of a group of renowned scientists who questioned Darwin and the emerging scientific community over their seemingly simplistic approach to the natural world.

A keen example is Alfred Russel Wallace, the British naturalist, explorer, geographer, anthropologist, and biologist who coauthored the theory of evolution with Darwin; he was a prominent researcher in this movement. He said that psychical research, and the understanding of supernatural events, could help illuminate “the nature of life and intellect, on which physical science throws a very feeble and uncertain light.” (44) In the midst of Wallace’s work, Darwin challenged him with these words regarding his psychical research: “I defy you to upset your own doctrine.” (40)

In greater depth, William James, the pioneering American psychologist and philosopher at Harvard who wrote influential books on the young science of psychology (The Principles of Psychology), educational psychology, the psychology of religious experience and mysticism, and the philosophy of pragmatism, said this about psychical research: “It seems to me that psychology is like physics before Galileo’s time—not a single elementary law yet caught a glimpse of.” (168) This book explores James’s lead role in the documentation of a possible existence outside our normal reality. Many of the explorers in this research community questioned it through traditional scientific lenses such as electromagnetic field theory, and came to believe that there were underpinnings difficult to define with language and scientific certainty.

Many of the scientists involved did not follow any organized religious doctrine; in fact, many proposed that the limitations religion imposed were the same limitations offered by the emerging traditional scientific theories of their time. Frederic Myers said their work was “an endeavor to learn the actual truth as to the destiny of man,” while Wallace argued that the subject should be “dealt with as constituting an essential portion of the phenomena of human nature.” (33, 35)

Though not a part of this research community, Thomas Edison shared with a journalist his view of the work of James and other prominent scientists: “Well, there you are. We do not understand. We cannot understand. We are too finite to understand. The really big things we cannot grasp as yet.” (318)

Everard Feilding said that members of society were “unwillingly children of the time in which they live.” (321) It is here that scientific determinism becomes cemented in our collective consciousness. Blum expands on the idea, stating that the people of James’s and Feilding’s time “lived surrounded by new knowledge, inundated by facts; they were told absolutely that such information was the only route to certainty about the universe.”

We seem to still be struggling with this same problematic approach to our thinking today.

I found this book relevant as an interesting juxtaposition to the readings, discussions, and presentations we have explored in complexity theory. There was indeed an organized group of scientific intellectuals and researchers who not only questioned the dominant deterministic philosophy of their time, but also challenged the very doctrine of traditional religion. I find myself intrigued by such openly pragmatic inquiry; I think it allows greater complex, adaptive thinking to enter into our conversations and our consciousness-building. Many of the same themes resonate throughout the telling of this scientific movement, and I find the same intuition and scientific curiosity present in the studies and words of today’s complexity theorists. Almost all of these 19th-century scientists subscribed to Darwin’s theory, but many questioned its reductionist simplicity; even Wallace, who helped coauthor the theory, questioned its many limitations publicly during Darwin’s life. The book explores Darwin’s many responses, along with the angst of the traditional scientific community, to this unconventional scientific study. The topic (psychical research) is still considered controversial at best today.

To further intrigue you, here is a list of some of the many scientists, intellectuals, and artists involved in this psychical scientific movement, as detailed in the book. Many of them committed their entire academic lives to proving the theory of life after death.

A sample from the book:

• Richard Hodgson: Philosophy
• T.H. Huxley— English biologist, physician, and scientific scholar who coined the word “agnostic” to describe his own views on religion, a term whose use has continued to the present day and which throws light on his demanding criteria for proof in science
• Henry Sidgwick: English philosopher who published Methods of Ethics at Cambridge University
• Frederic Myers-- a scholar, a poet of distinction, and a psychologist-- together with Edmund Gurney, author of The Power of Sound (1880), an essay on the philosophy of music, started the British Society for Psychical Research in 1882
• William Fletcher Barrett-- the first physicist to join the movement; he discovered Stalloy, a silicon-iron alloy used in electrical engineering
• Famous figures who joined the British Society for Psychical Research included painters, clergymen, politicians, spiritualists, and writers: Alfred Lord Tennyson (Britain’s poet laureate), essayist and social critic John Ruskin, Rev. Charles L. Dodgson (pen name Lewis Carroll), who wrote Alice in Wonderland, and Samuel Clemens (pen name Mark Twain)
• Charles Richet-- winner of the Nobel Prize in Physiology or Medicine, he investigated neurochemistry, digestion, thermoregulation in homeothermic animals, and breathing; he also wrote on para-scientific subjects, which dominated his later years, including Traité de Métapsychique (“Treatise on Metapsychics,” 1922), Notre Sixième Sens (“Our Sixth Sense,” 1928), L’Avenir et la Prémonition (“The Future and Premonition,” 1931), and La Grande Espérance (“The Great Hope,” 1933)
• Julian Ochorowicz-- Polish philosopher, psychologist, poet, publicist, and pioneer of empirical research in psychology
• Oliver Lodge-- a physicist and writer involved in the development of the wireless telegraph; altogether he wrote more than 40 books on the afterlife, the ether, relativity, and electromagnetic theory

Enjoy; it is truly fascinating to see prominent thinkers question the limitations of Darwin’s thinking during its emerging phase. It is also interesting to read about the potential of a reality beyond our everyday experience during a time when the scientific community was more open to such inquiries into the supernatural workings of the world than it is today. It begs us to consider that perhaps our view is tunneled, and that we are unable to embrace, or perhaps are in fear of, what it means if the whole is truly greater than the sum of its parts.

Understanding Complexity and Emergence

After reading “Critical Mass” by Philip Ball and delving into the mechanics of phase transitions, I wanted to explore the current academic discourse around the study of complex systems and emergence. To that end, I read several articles from the journal “Complexity” dating from 2002 through the most current 2007 issue.


Corning, Peter A. “The Re-emergence of ‘Emergence’: A Venerable Concept in Search of a Theory.” Complexity 7.6 (2002): 18-30.

Chu, Dominique, Roger Strand, and Ragnar Fjelland. “Theories of Complexity: Common Denominators of Complex Systems.” Complexity 8.3 (2003): 19-30.

Klüver, Jürgen. “The Evolution of Social Geometry: Some Considerations about General Principles of the Evolution of Complex Systems.” Complexity 9.1 (2004): 13-22.

Hübler, Alfred W. “Understanding Complex Systems: Defining an Abstract Concept.” Complexity 12.5 (2007): 9-11.

Schuster, Peter. “A Beginning of the End of the Holism versus Reductionism Debate?” Complexity 13.1 (2007): 10-14.

Background on the authors:

Peter Schuster: the current editor of the “Complexity” journal; he has been affiliated with the Santa Fe Institute and works at a university in Austria in the field of theoretical chemistry.

Peter Corning: has served as director of the non-profit Institute for the Study of Complex Systems and as a founding partner of a private consulting firm in Palo Alto, California. His field is behavioral genetics.

Alfred Hübler: is at the Center for Complex Systems Research, Department of Physics, University of Illinois at Urbana-Champaign.

Jürgen Klüver: is a professor of Information Technologies and Educational Processes at the University of Duisburg-Essen. His fields of research include mathematical and computational sociology, theoretical sociology, sociology of science, and theory of science.

Dominique Chu: is an academic fellow at the Computing Laboratory at the University of Kent in Canterbury. His interests include bio-inspired computing, computational biology and computational modeling of biological systems, and molecular computation.

A recurring theme is that a combination of “reductionist” and “holistic” approaches is seen as necessary for understanding the phenomenon of emergence and the evolution of complex systems in nature. The term “reductionism” once meant an understanding of the “parts” of a system, while “holism” implied something almost mystical that could not truly be understood. These terms have taken on different meanings in the study of emergence. Peter Schuster makes these points succinctly in the most current issue, while Peter Corning’s 2002 article makes a more detailed distinction between the two approaches:

Reductionist approach to understanding complex systems (also described as “systems science”):
1. Understood by studying interactions between particles
2. Systems are deterministic yet unpredictable
3. Particles are seen as having inherent tendency to self-organize
4. There is a search for fundamental laws that explain behavior (but underlying causal agency is not specified)
5. Treats emergence as an “epiphenomenon” (resulting from interactions, but having no causal effect)
6. Explains “how? systems work

Among others, Corning labels the following “reductionist”: Barabási, Kauffman, Holland, Buchanan.

Presumably, the study of phase transitions and the modeling applications described in “Critical Mass” by Philip Ball would also be labeled reductionist in the sense that Corning describes.

Holistic approach:
- System understood as multi-leveled. Causation is upward, downward, and horizontal.
- Effects may be co-determined by the context and the interactions between the whole and its environment
- Causation is iterative – synergistic effects of interactions are also causes of other effects
- New emergent properties arise at higher levels of organization
- Properties of the parts are modified, transformed, reshaped by their participation in the whole
- Organized, purposeful activity: instruction-driven as opposed to law-driven (e.g. genetic code)
- Historical
- Attempts to explain “why? evolution occurs

Corning labels the following “holistic”: Casti, and Corning’s own “Synergism Hypothesis.”

Klüver is looking for general principles that determine the laws of evolution, particularly in complex social systems (a reductionist approach, per Corning). He makes a helpful distinction between different types of system dynamics, which relates to my blog posting earlier in the semester about the different classes of computer models (System Dynamics, agent-based):

First-order dynamics: the kind of dynamics that a system exerts by changing its states but not changing its rules of interaction. (Example: boids)

Second-order dynamics: an adaptive system is characterized by second-order dynamics, i.e., dynamics that a system generates by changing its rules of interaction according to environmental demands. (Example: Forrester’s “System Dynamics”)

Third-order dynamics: combines features of both first- and second-order dynamics: like first-order dynamics it unfolds by its own logic; like second-order dynamics it takes environmental demands into account and changes its own rules of interaction; yet in addition it is also able to vary its own structural initial conditions, which started the whole process. A system capable of third-order dynamics can define its own criteria of success and thereby change its environment. (Example: agent-based modeling)
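Klüver's first two levels can be sketched schematically (my own toy illustration, not Klüver's formalism): a first-order system only updates its state under a fixed rule, while a second-order system also rewrites the rule itself when the environment demands it.

```python
# Schematic contrast between first- and second-order dynamics (illustration only).
# A first-order system changes state under a fixed rule; a second-order
# (adaptive) system also changes the rule itself in response to its environment.

def first_order(state, rule, steps):
    """State evolves; the rule never changes."""
    for _ in range(steps):
        state = rule(state)
    return state

def second_order(state, rule, environment, adapt, steps):
    """State evolves AND the rule is rewritten when the environment demands it."""
    for _ in range(steps):
        state = rule(state)
        rule = adapt(rule, environment(state))  # the system modifies its own rule
    return state

# Toy example: exponential growth that switches to decay once the
# environment signals the state has grown too large.
grow = lambda s: s * 1.5

def adapt(rule, demand):
    # If the environment signals "too big", switch to a decay rule.
    return (lambda s: s * 0.5) if demand == "too big" else rule

env = lambda s: "too big" if s > 100 else "ok"

print(first_order(1.0, grow, 20))               # unbounded growth under the fixed rule
print(second_order(1.0, grow, env, adapt, 20))  # growth is capped once the rule adapts
```

Third-order dynamics would go one step further: the system would also be able to rewrite `adapt` itself, i.e., change its own criteria of success.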

Chu and co-authors Strand and Fjelland discuss the intrinsic limitations of computer and mathematical modeling techniques, which seems to echo Corning’s discussion of reductionist and holistic approaches:
Real systems in nature have the property of “radical openness” resulting from the rich connections between real systems and their environment. In contrast, computer models are closed. In order to construct a workable model, scientists select a relatively small number of system elements deemed relevant and formalize these elements into mathematical equations or computer code.

Thus, there is a family of overlapping possible and actually realized models. This is the source of what is called “contextuality.” A system is “contextual” if it includes one or more elements that also occur in a different system, or if it is itself a shared element between more than one system. In the other system(s), the shared elements take part in causal processes different from those included in the original system. The property of contextuality is a consequence of the partitioning of the world into system and environment that precedes any modeling enterprise; only a truly global model would not be contextual.

Hübler talks about the production of new knowledge in the study of complex systems. He is critical of current research: “because of the traditional preference for abstract work, abstract research results with very little experimental grounding are being published at an ever accelerating rate, whereas experimental work receives comparatively little attention and funding. This raises the question how much knowledge is being created by current complex systems research.” He is concerned that current research is not developing the network of concepts necessary to understand complex systems holistically, and he concludes that a “practical” understanding of complex systems, and practical applications of the research, are not likely to be developed in the near future.

The Fifth Discipline

The Fifth Discipline: The Art & Practice of The Learning Organization by Peter M. Senge. Doubleday. New York. 1994.

The author: Senge was the director of the Center for Organizational Learning at the MIT Sloan School of Management and is a senior lecturer at MIT. See for more, including other publications.

First lesson: Do your research before you go to the bookstore, buy the book, spend weeks reading it, only to discover that there is a newer “completely revised? edition.

Lessons from the book (the older edition):

What are the Five Disciplines?

Systems Thinking: a way of thinking that sees beyond individual patterns to the whole pattern of patterns, the whole of wholes.

Personal Mastery: “continually clarifying ... our personal vision, of focusing our energies, of developing our patience, and of seeing reality objectively.”

Mental Models: we all have them, the assumptions that are beneath our awareness, that drive our worldview and thus, our decisions.

Building Shared Vision: bringing people together toward a common future (rather than simply a goal).

Team Learning: the whole is smarter than the sum of the parts. Learning as a team becomes synergistic. The example here of a sports team that seems to play better than the sum of the individual talent on the team was illuminating.

Senge’s position is that being great (as an individual, an organization, a company) in any one or two of these disciplines will not mean success. Even “mastering” team learning, mental models, personal mastery, and shared vision will not mean success, at least not in the long term, because the problems we face today are problems of complexity. He says there are two types of complexity: detail complexity, which means many variables, and dynamic complexity, which means that cause and effect are not close in time and space. He asserts that most of our organizations only know how to respond to effects that are closely linked to their causes; if an effect is too far down the time/space line, we no longer see what caused it. Also, many relationships are not linear at all (of course) but are cause-effect-cause loops.
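Dynamic complexity, where effect lags cause, is easy to demonstrate numerically. Here is a minimal sketch of my own (not Senge's model): a controller that corrects toward a target based on the current reading, like adjusting a shower whose temperature responds only several seconds after you turn the tap.

```python
# Dynamic complexity: when the effect of an action is delayed, naive
# corrective feedback overshoots and oscillates (illustration only).

def adjust(target, delay, steps, gain=0.5):
    """Each step we correct based on the CURRENT reading, but our past
    adjustments only take effect `delay` steps later."""
    temperature = 0.0
    pipeline = [0.0] * delay          # adjustments still "in the pipe"
    history = []
    for _ in range(steps):
        temperature += pipeline.pop(0)       # a past adjustment finally arrives
        correction = gain * (target - temperature)
        pipeline.append(correction)          # takes effect `delay` steps later
        history.append(temperature)
    return history

no_delay = adjust(target=40.0, delay=1, steps=30)
delayed = adjust(target=40.0, delay=5, steps=30)

# With no delay the temperature settles smoothly toward 40;
# with a 5-step delay it overshoots and swings around the target.
print(round(no_delay[-1], 1))
print(max(delayed) > 40.0)
```

The cause (turning the tap) and the effect (the temperature change) are separated in time, so an organization reacting only to what it currently sees keeps overcorrecting — exactly the failure mode Senge attributes to dynamic complexity.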

The Fifth Discipline of systems thinking is offered as a language that organizations can use to start to talk about and expose what is really going on (“current reality”). This allows models (paper or computer) to be developed that shed light on where the leverage points really exist. Then, using the “creative tension” between the “current reality” and the “shared vision,” we can find creative solutions.

A Learning Organization is one that fosters personal mastery, so that individuals continue to grow their own vision and can contribute it to a shared vision (which is not majority rule, but an outgrowth of truly sharing the various visions, which then build on one another). In order to grow that shared vision, the organization must examine its corporate-culture mental models as well as individuals’ mental models, holding them up to scrutiny in an environment that supports that risk, and discarding those that no longer apply. This can be part of the team learning process, because as the mental models are exposed and dealt with, the team can discover the underlying systems at work. With individuals committed to their own and each other’s growth, and a common vision for the future (which continually examines itself), a team is born that can learn together, using its common language of systems thinking to understand the current reality, the gap between that reality and the shared vision, and how to leverage its systems to create a new reality ever closer to that vision. (Hopefully you see the disciplines as loops interacting with each other.)

"Practicing" the disciplines is the emphasis. This may be frustrating to people who want/need a problem fixed right now. What is helpful is that at least they'll be working on the right problem. (For those who have familiarity with other "practices" like meditation, this emphasis will be familiar.)

If you haven’t read it, I recommend it, if just for Appendix 2: System Archetypes. That appendix contains the systems pictures of every problem I’ve encountered, be it in a corporate, governmental, non-profit, or personal setting. It gives the structure, the early warning signs, the business principle, and some examples. This is a handy reference guide.

Having your colleagues read it will mean they won’t look at you so funny when you are talking about feedback loops.

PS. Happy I can post!

Adaptive Agent Modeling in a Policy Context

Gulden, T. R. (2004). Adaptive Agent Modeling in a Policy Context. Unpublished Dissertation, University of Maryland, College Park.

This dissertation attempts to add to the empirical functionality of adaptive agent (or agent based) models for use in policy analysis. It makes some self-admittedly modest contributions to both theory and methodology.

Following Axtell (2000), Gulden claims that adaptive agent models may be useful in three distinct situations. Firstly, these models can be used to analyze systems which can be modeled with equations that are solvable. Examples of these systems abound in economics, when the neoclassical assumptions of perfect rationality, perfect information flow, decreasing returns, etc. are held to be true. Although adaptive agent models are obviously not necessary for modeling such systems, Gulden claims they may be useful in providing novel ways to structure the analytical issue and in allowing the modeler to relax assumptions that may not reflect the true dynamics of the system.

Secondly, adaptive agent models can be used to analyze systems that can be described by equations that are not easily solvable either analytically or numerically. Gulden says that "these include models with badly behaved equilibria, particularly models where the features of interest are not equilibrium states, but rather the fluctuations that the system goes through on its path to equilibrium." Analytical intractability in these systems may be due to the heterogeneity of agents, spatial dependence between agents within the system, or complex internal states of agents.

Thirdly, adaptive agent models may be useful in analyzing systems for which formulating numeric equations is not analytically feasible and may not be theoretically productive. Such systems often feature spatial heterogeneity between agents and bounded rationality of agents.

During the course of his dissertation, Gulden applies an adaptive agent model to three different problems, each typical of one of the three system classes. The first problem Gulden tackles is how the assumption of increasing returns may affect international trade policy. As a key feature of macroeconomic theory, international trade has long been integrated within macroeconomic models; however, this means that neoclassical assumptions have also been rigorously applied to trade theory. The assumption of decreasing returns, in particular, has led to the supremacy of free trade in today's global economy. Using adaptive agent modeling, Gulden is able to relax the assumption of decreasing returns in favor of the increasing returns that have been observed in some national industries. He is confident that his model accurately handles increasing returns and imperfect capital mobility because the adaptive agent model reproduces the neoclassical results when neoclassical assumptions are given in the model's parameters. Under the assumptions of decreasing returns and perfect capital mobility, Ricardian trade theory predicts that nations will most efficiently produce the goods for which they have a comparative advantage and will trade for all other goods. Under increasing returns and imperfect capital mobility, however, a nation may gain a competitive advantage in an industry for which it is not particularly well suited and may keep producing that industry's goods even when other nations could potentially produce them more efficiently. Practically speaking, a developed nation with intensive capital investment may gain supremacy in an industry that may be better suited to a particular developing nation. Gulden argues that, in such cases, trade protectionism on the part of the developing nation would be preferable to free trade.
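The Ricardian baseline that Gulden's model relaxes is simple arithmetic. Here is a toy check of my own (made-up labor costs, not Gulden's data): each country should specialize in the good whose opportunity cost is lower at home than abroad.

```python
# Ricardian comparative advantage under the standard assumptions --
# a toy arithmetic sketch, not Gulden's adaptive agent model.
# Labor cost (hours per unit) for two hypothetical countries and two goods:
hours = {
    "Home":   {"cloth": 2, "wine": 4},
    "Abroad": {"cloth": 6, "wine": 3},
}

# Opportunity cost of one unit of cloth, measured in wine forgone:
for country, h in hours.items():
    print(country, h["cloth"] / h["wine"])
# Home gives up 0.5 wine per cloth; Abroad gives up 2.0 -- so Home should
# specialize in cloth, Abroad in wine, and both gain from trade.
```

Gulden's point is that once returns are increasing rather than decreasing, this tidy prediction can fail: an early lead, not underlying suitability, can lock in who produces what.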

The second problem analyzed with an adaptive agent model is the observed Zipf distribution of city sizes in many nations. The Zipf distribution is a particular type of power law described as rank/size: in the case of cities, a city's size rank among all other cities in the nation-state is inversely proportional to its population. The log-log plot of these variables is linear with a slope of -1. The interesting thing about the Zipf distribution of cities in a nation is that researchers have been able to model this distribution mathematically, but not in a way that is theoretically meaningful. The general assumption is that cities assume a Zipf distribution as a result of the economic dynamics that disperse populations among cities. Gulden compares the Zipf-distributed cities of France and the United States, then applies the same model to the distribution of Russian cities, which are not Zipf-distributed: there is, in fact, an overabundance of middle-sized Russian cities relative to a Zipf distribution. Gulden is careful to affirm that a Zipf distribution is not normative but positive; different dynamics and different national objectives have merely resulted in different rank/size distributions between the US and France on the one hand and Russia on the other. What is important is that Gulden's model is able to replicate the distributions of cities in all three countries fairly well, and that the parameters of the model which lead to these results are theoretically meaningful. The policy implications of these results are twofold. First, they give insight into how city sizes might redistribute in Russia should national political leaders ever choose to stop subsidizing population maintenance in the medium-sized cities; understanding how this change would accord with the geographical dependency of economic markets could help leaders decide how to shift subsidies to develop infrastructure in certain cities.
This model also offers insight into trends in city distribution that we can continue to expect in developing countries that are continuing to urbanize. Countries that have an urban population that exceeds the number of cities necessary to maintain a Zipf distribution can expect the development of megacities. Simply attempting to incentivize relocation to medium-sized cities will not work. If countries can expect the evolution of these megacities, they will need to address potential deficiencies of critical infrastructure and other issues that are related to large populations and high population densities (e.g., pollution).
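
The rank/size relationship described above is easy to check numerically: sort the city sizes, regress log(size) on log(rank), and see whether the slope is near -1. A minimal sketch in Python (the city figures below are synthetic, not Gulden's data):

```python
import math

def rank_size_slope(populations):
    """Fit the slope of log(size) vs. log(rank) by least squares.

    A Zipf distribution has a slope close to -1.
    """
    sizes = sorted(populations, reverse=True)
    xs = [math.log(rank) for rank in range(1, len(sizes) + 1)]
    ys = [math.log(s) for s in sizes]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Synthetic, perfectly Zipf-distributed "city" sizes: size = C / rank.
cities = [1_000_000 // r for r in range(1, 101)]
print(round(rank_size_slope(cities), 2))  # slope comes out close to -1
```

The same three lines of regression could be pointed at real census data for France, the US, or Russia to reproduce the comparison Gulden makes.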

The final problem that Gulden tackles with an adaptive agent model is spatial and temporal patterns of armed conflict. According to Gulden, "much of the existing literature examining quantitative aspects of civil violence concentrates on risk factors and searches for correlation between these factors and various indicators of violence." This type of analysis is obviously limited in its ability to analyze the nuances of conflict. The strength of an adaptive agent model in this context is that it has the power to analyze the internal dynamics of conflict. Gulden applies this model to a detailed set of data from the Guatemalan civil conflict that was compiled throughout the conflict's duration (1960-1996). Because conflict dynamics are so complex, Gulden's analysis does not form a comprehensive explanation for the conflict; he seeks, rather, to demonstrate the appropriateness of this methodology for analyzing armed conflict. In his model, Gulden uses a ten-year subset of the data from 1977-1986. The violence in Guatemala was mostly civil in nature, but some of the violence was genocidal. When the genocidal killings were disaggregated from the broader civil conflict, the killings in the broader conflict followed a Zipf distribution. This disaggregation is theoretically justifiable because different dynamics underlie these two types of violence. The model that Gulden employed was developed by scholars at the Brookings Institution. Although the model is broad, it does show some success at depicting the dynamics of conflict. The important policy implication is that modeling conflicts can help decision makers know which areas would benefit from a peacekeeper presence and which areas would require other interventions.

Regarding the Zipf distributions within the city and conflict data, Gulden is careful to point out again that there is nothing inherently normative about the Zipf distribution. Some scholars have asserted that the presence of a power law is in itself proof of complex behavior, but Gulden claims that the presence of this particular distribution only indicates that large incidents will be very large, and small incidents will be very small. This is because "a Zipf distribution can, in general terms, be produced by a phenomenon which balances positive feedback (making large events larger) and negative feedback (keeping most events small)." Thus, a Zipf distribution tells us little beyond the broad dynamics of a system.
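
The feedback balance in that quote can be illustrated with a toy Simon-style growth process (a hypothetical sketch of the mechanism, not Gulden's model): at each step a brand-new entity of size 1 appears with small probability, and otherwise an existing entity grows in proportion to its current size.

```python
import random

def simon_process(steps, p_new=0.1, seed=42):
    """Simon-style proportional-growth process.

    Each step: with probability p_new a new entity of size 1 appears
    (negative feedback: most entities start and stay small); otherwise
    one unit is added to an existing entity chosen in proportion to
    its size (positive feedback: large entities grow larger).
    """
    rng = random.Random(seed)
    sizes = [1]
    for _ in range(steps):
        if rng.random() < p_new:
            sizes.append(1)
        else:
            total = sum(sizes)
            r = rng.uniform(0, total)
            acc = 0.0
            for i, s in enumerate(sizes):
                acc += s
                if r <= acc:
                    sizes[i] += 1
                    break
    return sorted(sizes, reverse=True)

sizes = simon_process(20_000)
# The largest entity ends up orders of magnitude bigger than the median one.
print(sizes[0], sizes[len(sizes) // 2])
```

Plotting these sizes against rank on log-log axes gives the heavy-tailed, roughly straight line that the quote's feedback balance predicts.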
In conclusion, Gulden says that the adaptive agent approach is especially appropriate for (and I quote verbatim):
• Modeling path dependent processes where the history of the system matters. (Particularly relevant in
the chapter on trade)
• Modeling individual based processes where the heterogeneity of actors matters. (Particularly relevant in
the chapter on civil violence)
• Modeling situations where bounded rationality and imperfect information are fundamental to the
process under study. (Particularly relevant in the chapters on cities and civil violence)
• Managing conserved quantities. (Relevant in all three cases)
• Examining distributional impacts of changes in process or policy. (Relevant in all three cases)

Most importantly, in expanding the analytical tools afforded to researchers, adaptive agent modeling "[expands] the way that problems can be conceived." Because traditional econometrics forces researchers to restrict their formulation of a model, it severely limits how researchers are able to define the problem. Adaptive agent modeling "allows for a richer pre-analytic vision which takes account of history, social organization, and human diversity." In the social sciences, this may represent a huge conceptual leap. Because social scientists are very rarely able to conduct randomized, controlled experiments, their traditional methodology is largely concerned with controlling for the bias that is thus introduced. By expanding their analytical toolbox, researchers may be able to achieve a far more complete view of social phenomena. This could lead to a far more comprehensive range of policy responses.


Axtell, Robert (2000). “Why Agents? On the Varied Motivations for Agent Computing in the Social Sciences.” CSED Working Paper No. 17.

Epstein, Joshua M., John D. Steinbruner, and Miles T. Parker (2001). “Modeling Civil Violence: An Agent-Based Computational Approach.” Brookings Institution Center on Social and Economic Dynamics Working Paper No. 20.

Complexity Theory & Political Science

After searching academic journals for articles combining complexity theory and political science, I propose the following five as a roughly representative sample.

• Geyer, Robert R. “Globalization, Complexity, and the Future of Scandinavian Exceptionalism.” Governance (London), vol. 16, no. 4, pp. 559-576, October 2003.

• Ma, Shu-Yun. “Political Science at the Edge of Chaos? The Paradigmatic Implications of Historical Institutionalism.” International Political Science Review, vol. 28, no. 1, pp. 57-78, January 2007.

• Hoffmann, Matthew J. and John Riley Jr. “The Science of Political Science: Linearity or Complexity in Designing Social Inquiry.” New Political Science, vol. 24, no. 2, pp. 303-320, June 2002.

• Feder, Stanley A. “Forecasting for Policy Making in the Post-Cold War Period.” Annual Review of Political Science, vol. 5, pp. 111-125, 2002.

• Brunk, Gregory G. “Why Do Societies Collapse? A Theory Based on Self-Organized Criticality.” Journal of Theoretical Politics, vol. 14, no. 2, pp. 195-230, April 2002.

(Note: for a quick read, skip to the “Summary and Conclusions” section at the end.)

Each of these articles argues that complexity theory is necessary to political science. This is because traditional linear thinking has proven inadequate to explain complex systems, among which are human political organizations. Typically, linear models are deterministic and reductionist, attempting to order the world according to a single set of universal principles. Such were the prevalent visions of social order in the Cold War, communism and capitalism, and “it was the pursuit of these extreme forms of order that brought about extreme forms of human suffering” (Geyer). Unexpectedly, to “devotees of the linear model,” Scandinavian countries have thrived by incorporating mixed elements of both these supposedly inconsistent models, remaining flexible in their response to the demands of a global economy, and avoiding “neat, orderly, and universalistic conclusions” (Geyer).

Moreover, analysis of past national security and foreign policy decisions of the United States government indicates that they have suffered from the linear nature of single-outcome forecasting and “a prejudice toward continuity of previous trends” (Feder). The desire for certainty in the complex system of international relations has not, in effect, reduced uncertainty, but has “only increased the margins of surprise” (Feder). This has been counterproductive, because the basic value of a forecast in the context of foreign policy is not that it accurately predicts the future, but that it can “keep us from being surprised” (Feder). To do this, it must provide a survey of several possible outcomes, together with leading indicators for each, rather than a prediction of a single outcome. Non-linear models show promise in being able to do this. “As the inputs are varied in plausible ways, the models indicate which outcomes are possible and which are impossible” (Feder). It is a case of becoming familiar with the properties of the system rather than focusing on finding the one input which will produce the desired outcome. By using models to “examine ‘what if’ scenarios, one can develop a sense [i.e. intuition] of which changes in the political environment will have a significant effect on a particular issue” (Feder). In the early 1980s the planners at Royal Dutch/Shell Oil considered the possibility that radical changes in the Soviet Union could cause the price of oil to fall. “They sought evidence that such an event [i.e. societal collapse] was possible and found it. Shell’s insight came from ‘asking the right question. From having to consider more than one scenario.’ (Schwartz)” (Feder).

On a more theoretical level, complexity theory includes several concepts which better explain political behavior than strictly linear models. Two of these concepts are path dependence and the economics of increasing returns. At bottom, they are very similar concepts, for they both posit that political systems are like autocatalytic sets: their “outcomes at critical junctures trigger feedback mechanisms that reinforce the recurrence of a particular pattern into the future” and “once a social process has started, it will produce its own law of inertia . . .” (Ma). They thus explain things like political momentum; why success feeds upon success; why, for example, American presidential candidates are so eager to start off well in the early state primaries. They controvert the linear notion that the outcomes of a system are always proportional to its inputs, for they consider the catalytic properties of the system, rather than just its inputs. Thus, given a critical juncture of a human system, even a trivial cause can have a large effect (the “butterfly” effect). This idea is central to all five articles. Moreover, the idea is problematic for political scientists doing traditional, linear, cause-and-effect analysis. For when all inputs, both small and large, may prove equally efficacious in a complex system, how do you decide which ones are important? Only by mastering all the complex relations which may exist among different nodes of a human network would you be able to predict outcomes. It is this realization which requires the acceptance of uncertainty and the rejection of reductionist determinism as unrealistic.
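
Path dependence and increasing returns are often illustrated with Brian Arthur's favorite toy model, the Pólya urn: draw a ball at random, return it together with another of the same color, and repeat. The long-run color share is locked in by early, chance-driven draws. A minimal sketch (an illustration of the concept, not a model from any of the five articles):

```python
import random

def polya_urn(draws, seed):
    """Pólya urn: each draw reinforces the color drawn (increasing returns).

    Returns the final share of red balls. Early draws have a lasting,
    self-reinforcing effect, so different random histories settle on
    very different long-run shares.
    """
    red, blue = 1, 1
    rng = random.Random(seed)
    for _ in range(draws):
        if rng.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

# The same process, differing only in its random history, locks in
# different "market shares" -- outcomes are not proportional to inputs.
shares = [round(polya_urn(10_000, seed), 3) for seed in range(5)]
print(shares)
```

The share stabilizes within each run (negative variance over time) but varies wildly across runs: exactly the "law of inertia" Ma describes.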

Another complexity concept which may prove applicable to political phenomena is self-organized criticality (SOC). This is introduced to help answer the question of why human societies have collapsed throughout history. Noting that traditional political science has failed to discover a linear law to explain the phenomenon, Brunk imports SOC from the physical sciences (where it has many applications). One appropriate SOC metaphor is Per Bak’s power-law sandpile. As each grain of sand falls on the pile, the pile’s complexity increases, to the point at which it becomes hypersensitive to even the smallest of shocks. At this point, dropping another grain on the pile results in a partial or complete collapse of the pile. Thus do human societies, as they always tend to become more complex by adding nodes and dependency relationships, tend toward the point whereby a small shock can result in collapse of the whole. The size of the reaction is “not caused by the size of outside shocks, but by how shocks are transmitted within a system as complexity cascades” (Brunk). Just such reactions were the First World War (supposedly triggered by the assassination of a single man, Austria’s Archduke Franz Ferdinand), the 1929 stock-market crash, the Great Chicago Fire, etc. It is because societies have become adept at “dampening” their sensitivity to complexity cascades, by such techniques as FDIC insurance of American banks, river levees, cartel price agreements, etc., that civilization has been able to advance; but these efforts are often too weak or ill-designed to hold back the onslaught of chaos. “Wars, like forest fires, are SOC processes. . . . Unless the fundamental rules that govern the behavior of such a system change, it is only a matter of time before a catastrophic war destroys any given nation-state” (Brunk). However, the author bails out of the darkest fatalism by adding that “there is not enough empirical data on wars to directly examine these patterns.”
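
Per Bak's sandpile is simple enough to simulate directly. The sketch below (the standard Bak-Tang-Wiesenfeld toy model, not Brunk's own analysis) drops grains on a grid, topples any site holding four or more grains, and records how many topplings each new grain triggers: most avalanches are tiny, a few cascade across the whole system.

```python
import random

def sandpile_avalanches(size=20, grains=5000, seed=1):
    """Bak-Tang-Wiesenfeld sandpile on a size x size grid.

    Drop grains one at a time at random sites; any site holding 4 or
    more grains topples, sending one grain to each neighbor (grains
    falling off the edge are lost). The avalanche size is the number
    of topplings triggered by a single dropped grain.
    """
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    avalanches = []
    for _ in range(grains):
        r, c = rng.randrange(size), rng.randrange(size)
        grid[r][c] += 1
        topples = 0
        unstable = [(r, c)] if grid[r][c] >= 4 else []
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:
                continue  # already relaxed by an earlier topple
            grid[i][j] -= 4
            topples += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
        avalanches.append(topples)
    return avalanches

sizes = sandpile_avalanches()
# Most avalanches are tiny; a few are huge -- the heavy tail Bak reads
# as the signature of self-organized criticality.
print(max(sizes), sorted(sizes)[len(sizes) // 2])
```

Note that the avalanche size is determined by the pile's internal state, not by the size of the shock: every shock is a single grain, which is Brunk's point about transmitted complexity cascades.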

Granting the general superiority of non-linear models over linear models in the context of political science, questions remain as to how to advance our understanding of political systems using this new paradigm. “If complexity theory is to be more than a metaphor or a critique of the Newtonian method for political scientists, then it must facilitate the articulation of a research program, as succinct and as accessible as the traditional scientific approach” (Hoffmann). One suggestion is that political scientists begin to gauge the probability of events, “such as the likelihood of war, by systematically looking at the effects of initial conditions and small changes. . . . The analytic ‘trick’ is to identify points in a system whereby disturbances can have exponential effects on the direction of the system” (Hoffmann). In this context, Hoffmann and Riley review the work of two complexity theory pioneers, Robert Jervis and Robert Axelrod. Unfortunately, they find that Jervis “lends little advice on how to incorporate system thinking into [political scientists’] work.” They compare his thinking to a “conceptual jailbreak,” implying its main effect is to separate us from the old worldview, without providing the means of establishing a new program. Their review of Axelrod’s agent-based models is in a similar vein, indicating that they are “artificial and, by design, simple.” While most of them are “exceptionally interesting and powerful illustrations of fundamental processes, they do not analyze real political phenomena.” The authors are far from despair over this situation. They simply suggest that an empirical gap is still to be bridged between computer simulation and real phenomena.

For Feder, all complex system scenarios must be valued, without assigning degrees of probability to each. Degrees of probability tend to focus the mind on the single outcome with the greatest degree of probability; whereas in a complex system, each possible outcome must be considered. In addition, the gap between models and real phenomena is for him reflected in the necessity of asking the right questions: “analytic methods alone will not guarantee that policy makers and academics will not be surprised by political events. Preventing surprise depends on asking the right questions…” (Feder). So, non-linear models, by themselves, are no substitute for real-world experience. They must be combined with that experience by asking the right questions of them.

Brunk’s argument rests mainly on analogy, on the better fit of the SOC model to societal dynamics than linear models; but he also proposes a change in methodology, decrying the narrow specialization of traditional political scientists. Rather, “a holistic approach is sometimes needed, because some processes only emerge at the system level. . . they cannot be discovered by examining individual events, no matter how intently they are studied. . . The generic, but non-deterministic stochastic pattern of a SOC system always repeats in a general way, but never repeats in exactly the same way. In other words, while its general contours can be described, it is not deterministic in its individual events” (Brunk).

Summary & Conclusions

My goal in reading these five articles was to get a sense of how complexity theory has impacted political science. The impact is significant, on levels practical, theoretical and methodological. It seems that political science was ripe for this kind of impact, due to its historical inability both to establish itself as a true science and to resolve some of its fundamental problems. First, its impact can be felt on the practical level, where linear models have proven to have disastrous political consequences. Such have been the rigid, linear models of capitalism and communism, with their failure to predict the success of systems intermediate to these two types (the Scandinavian societies) and to predict such events as the sudden collapse of the Soviet Union. Second, it has had effects at the theoretical level, where ideas such as path dependence, the economics of increasing returns, and self-organized criticality each add a level of understanding to the study of complex systems which seems outside the scope of linear thinking. For example, it would seem difficult to understand the dynamics of political movements without recourse to the first two of these ideas; and it is tempting to explain the phenomenon of societal collapse in terms of self-organized criticality, although that idea needs more rigorous development to be persuasive. Third, it has had an impact on political science methodology, in the recognition that understanding human organization requires more attention to the properties of system process and less to cause-and-effect analysis. Thus, 1) familiarity with a system’s points of bifurcation, 2) familiarity with the range of its possible outcomes, and 3) identification of its behavioral patterns, rather than the details of any one scenario, are all touted as techniques appropriate to the non-linear approach.
In this respect, it seems that a highly inductive attitude is required by the new paradigm, even to the point of favoring unconscious intuition over conscious ratiocination. Whether this new approach satisfies Hoffmann and Riley’s requirement that the new paradigm “facilitate the articulation of a research program, as succinct and as accessible as the traditional scientific approach” is an open question.

An interesting divergence among the five articles is Geyer and Brunk’s differing perspectives on war. Whereas Geyer indicates that non-linear thinking will make for more flexibility of thought among human societies, and thus less conflict, Brunk treats war as a natural consequence (a “complexity cascade”) of self-organizing systems. In his view, it is only by societies’ ability to dampen down their tendency to organize to points of criticality that war may be avoided, and he is noncommittal as to how they can do that. Moreover, he indicates that many such attempts to dampen criticality are ill-designed and ineffective, even leading to an increase of criticality. So his is a decidedly more pessimistic view than Geyer’s.

Chaos, Complexity, Curriculum, and Culture: A Conversation

A summary of: Chaos, Complexity, Curriculum, and Culture: A Conversation (various authors; citation below)

The book is set out as a series of four iterations intended to first provide some basic information and then spin that information through a richer set of lenses moving toward applications of chaos and complexity within learning environments.

The portions of the book that lay a foundation in complexity and chaos tread ground that we have covered well. The sense of movement toward a new science and away from Descartes is clear, and names like Robert May (who is interviewed), Prigogine and the many others we have seen in the books are here as well.

However, the new material in this book is an attempt to blend chaos and complexity with existing theories of learning and cognition. As an example, the construct of a dissipative structure that reflects the emergence of order at points of instability has parallels to the work of Piaget. Jean Piaget is often maligned and misrepresented as just another stage-theorist psychologist. In fact, most of his work examined “how children come to know” science. His observations on a process he called “equilibration” reflect the characteristics of dissipative structures. Piaget felt that “learning perturbations” lead to changes in cognitive structures, leading to the accommodation of new knowledge. These structures resemble the dissipative structures of Prigogine. Piaget’s student Seymour Papert continued this work in his design of the “Microworlds” learning environment of the computer language LOGO.

While much of the book relates complexity and chaos to learning very nicely, sometimes chapters become too pragmatic. A promising chapter on emergence and classroom dynamics merely provides anecdotes about a specific classroom, using it to make statements of a more global nature. There is little research in this area but there certainly are authors writing in this domain that could have been brought to the discussion.

A restatement of a common definition of teaching from the book is: support of the student’s handling of increasing levels of complexity through communication and environmental design. There is considerable argument over what is more likely to increase learning: complexification of the material, or chunking of the complex material in an effort to lower the cognitive load. The former is proposed as further distancing learning from reductionism and allowing for emergent and recursive structures to form. The latter is the domain of the cognitive scientist evaluating brain function. The cognitive scientists are well ahead in the research battle at this point.

A theme across many chapters is that of the classroom as an interactive (complex adaptive) system. In a traditional classroom the roles and avenues of communication are fixed, and the possible structures formed by the system are limited and controlled. It is argued that a complexity/systems-aware classroom allows for increasing levels of complexity and encourages the emergence of new structures by reducing the hierarchy created when most or all of the learning is focused through the teacher. The suggested ways to accomplish this vary even within this book. Writers’ suggestions range from those supporting a completely holistic and autopoietic approach to those suggesting a more intentional design that still holds the teacher accountable for the design and dynamic modification of the learning setting over time. As an aside, we once had a visiting fellow from Apple Computer (Alan Kay) stop by one of the alternative schools I taught at. He suggested we walk through the building and see how many of the rooms were set up with the teacher standing firmly between the students and the main technology in the room (the whiteboard), thus establishing a fixed and disabling node through which all accredited knowledge passed.

Finally, the book spends some time examining possible directions for education research within the paradigm of complexity. It is suggested that many of the current research programs continue to be positivistic and reductionist in nature and that we now know that this mostly serves to either oversimplify the analysis of the situation or to oversimplify the experimental condition. Either approach produces results that are of little use in the real (read complex) classroom. One of the authors suggests that this is the reason that so much educational research produces results that are obvious to teachers.

In all, the book was uneven but stayed true to its subtitle, reading like a series of conversations. A recommended read for those in education settings wanting to hear about Cantor sets and autopoiesis applied to cognition and learning.

Doll, William E., M. Jayne Fleener, Donna Trueit, and John St. Julien (Eds.). (2005).
Chaos, complexity, curriculum and culture: A conversation. New York: Lang.

December 9, 2007

The Edge of Organization - Chaos and Complexity Theories of Formal Social Systems

My summary is from a book called The Edge of Organization: Chaos and Complexity Theories of Formal Social Systems by Russ Marion.

The book begins by debating the technical meanings of the terms Chaos and Complexity – and their fields of influence. Many argue that chaos theory is a general theory of nonlinear dynamics and complexity theory is a subset of chaos. Another school of thought holds that the two are simply two sides of the same issue.

The author maintains that the two share a general nonlinear premise, yet they represent different phenomena. Chaos theory tends to focus on systems in which nonlinearity is intense and mechanical – weather systems, fluid turbulence, or soil percolation. Such systems respond sensitively to, and magnify, minute differences in initial conditions; thus they are unpredictable. Chaotic systems are mathematically deterministic, but their descriptive equations cannot be solved. Complexity theory layers chaos theory on top of more traditional theories of stability, but the result is a unique theory in its own right. A complex system is more stable and predictable than a chaotic system – it borders on the state of chaos; it possesses sufficient stability to carry memories and sufficient dynamism to process that information. This balance between order and chaos enables the ability to reproduce, to change in an orderly fashion, and to self-organize, or emerge without outside intervention. Complexity theory is useful for describing biological phenomena such as evolution, ecological niches and social processes.
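
The sensitivity Marion describes is easy to demonstrate with the logistic map, the standard textbook example of deterministic chaos (my illustration, not an example from the book): two trajectories starting a ten-billionth apart become completely unrelated within a few dozen iterations.

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r * x * (1 - x), a fully
    deterministic equation that is chaotic at r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # a minute difference in initial conditions
gap = [abs(x - y) for x, y in zip(a, b)]
# The tiny initial gap is magnified at every step until the two
# trajectories are effectively unrelated -- deterministic, yet unpredictable.
print(gap[0], gap[-1])
```

This is exactly the sense in which chaotic systems are "mathematically deterministic but unpredictable": the rule is trivial, but any measurement error in the initial condition is amplified without bound.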

Marion supports the notion that complexity theory comes down on the side of increasing returns. As we have discussed in class, Brian Arthur argues that economic systems are self-reinforcing systems and can be better modeled by increasing returns than diminishing returns. Resources are not randomly distributed in a population, as traditional economics theory would have us believe; rather they condense about systems that already have a resource base. Factories are more likely to move to areas that already have resources from which they can draw; Japanese success with electronics begets more Japanese success with electronics.

Marion goes on to review the four reasons provided by Arthur for such behavior:
1. The cost of setting up an operation commits an organization to continue performing in its current mode.
2. Proficiency with new technology typically comes at the expense of a long learning curve; one that can be avoided by sticking with the current technology.
3. Related industries stand to lose if a focal industry changes its technology, and will consequently resist such change.
4. There is often an expectation or belief that the prevailing output will dominate the future, thus a reluctance to try something different.

The author spends some time on structural contingency theory – which argues that an efficient organization is one that has been properly tuned to environmental contingencies. He goes on to say that if an organizational environment is unstable, organizational structure must be flexible – leaders and workers must be able to adapt on the fly and to make ad hoc decisions. Another way to reduce the impact of an unstable environment involves what is referred to as organizational slack – any reserve that is maintained to deal with contingency. It is a stockpile of physical, human, structural, organizational, and managerial resources.

The book also reinforces the notion that change follows a power-law distribution. It is the product of outside forces and simple causes. While the occurrence of change is a random, often unpredictable event, its intensity distribution isn’t random – rather, there is a power-law order to the process. Change is controlled by complex interactive forces. Marion states that the power-law distribution is a footprint, a clue, left behind by the edge of chaos. It indicates the presence of a system that is fit but active, one that resists change but that daily subjects itself to the possibility of major change.

Kauffman’s At Home in the Universe is summarized in Marion’s book. He reiterates that flatter social organizations are better; that decentralized decision-making increases organizational fitness, and that hierarchical authority is unable to maximize organizational effectiveness.

Kauffman approaches social structures as patchwork quilts. Some quilts are simply one large patch, some have many small patches, and others have a moderate number of patches. Each patch acts to achieve its own self-interest, even at the expense of other patches. The quilt as a whole seeks a compromised state of fitness among its patches that maximizes the fitness of all patches. Quilts that are composed of many small patches attempt to coordinate too many conflicting constraints. There are too many possible combinations of wants and needs to find effective compromise, and fitness is trapped by too many small peaks. Kauffman calls this the “Leftist Italian” phenomenon, after the many competing political parties in Italy.
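
The quilt idea can be sketched computationally. In the toy model below (my own simplification, not Kauffman's NK model), binary variables with random couplings are split into patches, and each patch greedily flips whichever of its own variables most lowers the patch's local cost, ignoring the effect on other patches:

```python
import random

def patches_demo(n=24, n_patches=4, sweeps=30, seed=0):
    """Toy version of Kauffman's 'patches' on a randomly coupled system.

    Each of n binary variables contributes a site cost that depends on
    every other variable through random couplings J. The variables are
    split into patches; each patch flips one of its own variables per
    sweep whenever that lowers the patch's own total site cost, taking
    the rest of the quilt as given.
    """
    rng = random.Random(seed)
    J = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    s = [rng.choice([-1, 1]) for _ in range(n)]
    members = [[i for i in range(n) if i % n_patches == p] for p in range(n_patches)]

    def site_cost(i):
        return s[i] * sum(J[i][j] * s[j] for j in range(n) if j != i)

    def patch_cost(p):
        return sum(site_cost(i) for i in members[p])

    def global_cost():
        return sum(site_cost(i) for i in range(n))

    history = [global_cost()]
    for _ in range(sweeps):
        for p in range(n_patches):
            best_i, best_cost = None, patch_cost(p)
            for i in members[p]:
                s[i] = -s[i]              # trial flip
                c = patch_cost(p)
                if c < best_cost:
                    best_i, best_cost = i, c
                s[i] = -s[i]              # undo
            if best_i is not None:        # commit the patch's selfish move
                s[best_i] = -s[best_i]
        history.append(global_cost())
    return history

history = patches_demo()
# Selfish patch moves usually lower the global cost too, but need not --
# the quilt is negotiating a compromise, not running a global optimizer.
print(round(history[0], 2), round(history[-1], 2))
```

Varying `n_patches` here mirrors Kauffman's comparison between one big patch, many tiny ones, and the moderate patching he argues works best.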

Marion concludes in the same way he opens the book – with questions:

Why did the USSR collapse so suddenly in 1989?
How did the stock market manage to nose-dive in 1987?
Why is it so difficult to implement our 5-year strategic plans as they were designed to be implemented?
Why do organizations sometimes make bad decisions when issues are far from ambiguous?

The answers can be found in phase transitions, Arthur’s increasing returns, Kauffman’s co-evolutionary simulations and fitness landscapes, and in Lorenz’s butterfly effect. Social life is stable but dynamic, and balances itself on the brink of chaos.

I found little that was remarkable about this book - rather, just support for most of the theories and thoughts discussed and reviewed in class. I did find Kauffman's patchwork quilt analogy useful - and would not have been exposed to the theory without reading this book, or Kauffman's book for that matter.

December 8, 2007

cell theory

I found this liver pathologist writing about cell theory and complexity.