Archive for the ‘DeLanda’ Category
The world is, for Deleuze, an intertwined series of layers, networks within networks. Which is not to say that Deleuze says all of this in anything like a systematic fashion. But all of this is present, if only in the margins of Deleuze’s texts. All of which provides the basis on which it becomes possible to put Deleuze’s work into dialogue with the contemporary manifestations of the science of networks. Deleuze, after all, did not live to see the internet, nor the more developed forms of globalized capital, nor networked approaches to the mind and artificial intelligence. And while his networkology of events is in fact central to his metaphysics, it fascinates me that few beyond DeLanda have truly pursued this approach to Deleuze.
The concept of emergence – which I define as the (diachronic) construction of functional structures in complex systems that achieve a (synchronic) focus of systematic behaviour as they constrain the behaviour of individual components – plays a crucial role in philosophical debates about science as a whole (the question of reductionism), as well as in the fields of biology (the status of the organism), social science (the practical subject), and cognitive science (the cognitive subject). In this essay I examine how the philosophy of Deleuze, and that of Deleuze and Guattari, can help us see some of the most important implications of the debate on the status of the organism, as well as prepare the ground for a discussion of the practical and cognitive subject.
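The diachronic/synchronic distinction in this definition can be illustrated with a toy simulation — a purely hypothetical sketch, not anything drawn from the texts under discussion. Agents on a ring repeatedly adopt the majority state of their immediate neighbours: a stable global pattern is constructed over time (diachronic), and once formed, that pattern constrains any individual component that deviates from it (synchronic):

```python
import random

def majority_step(states):
    """One synchronous update: each agent adopts the majority state
    of itself and its two neighbours (wrap-around ring)."""
    n = len(states)
    return [1 if states[(i - 1) % n] + states[i] + states[(i + 1) % n] >= 2 else 0
            for i in range(n)]

# Diachronic construction: from a random start, stable blocks form over time.
random.seed(1)
states = [random.randint(0, 1) for _ in range(20)]
for _ in range(20):
    states = majority_step(states)

# Synchronic constraint: perturb one component of a uniform whole,
# and the surrounding structure immediately pulls it back into line.
whole = [1] * 10
whole[5] = 0
assert majority_step(whole) == [1] * 10
```

The point of the sketch is only the asymmetry it exhibits: the global pattern is built from local interactions, yet once in place it acts as a constraint on the very components that built it.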
All of what follows depends on accepting the strong case put forth by DeLanda that Deleuze’s project in Difference and Repetition and The Logic of Sense – continued in the collaborative works of DG – establishes the ontology of a world able to yield the results forthcoming in complexity theory. In terms I will explain further below, complexity theory models material systems using the techniques of nonlinear dynamics, which, by showing the topological features of manifolds (the distribution of ‘singularities’) affecting a series of trajectories in a phase space, reveals the patterns (shown by ‘attractors’ in the models), thresholds (‘bifurcators’ in the models), and the necessary intensity of triggers (events that move systems to a threshold activating a pattern) of these systems. By showing the spontaneous appearance of indicators of patterns and thresholds in the models of the behaviour of complex systems, complexity theory enables us to think material systems in terms of their powers of immanent self-organization.
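A minimal sketch of what attractors and thresholds look like in practice — using the logistic map as a stand-in for the nonlinear models mentioned above, with parameter values chosen purely for illustration:

```python
def logistic_trajectory(r, x0=0.2, steps=1000, keep=4):
    """Iterate the logistic map x -> r*x*(1-x) and return the last `keep`
    states, which approximate the attractor the trajectory has settled onto."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

# Below the threshold r = 3 the attractor is a single fixed point; just
# above it, the system crosses a bifurcation into a period-2 cycle: the
# same equation, a small change in intensity, a qualitatively new pattern.
print(logistic_trajectory(2.8))  # trajectory settles onto one value
print(logistic_trajectory(3.2))  # trajectory alternates between two values
```

The trajectories here are the 'series of trajectories in a phase space' of the passage above: the fixed point and the two-cycle are attractors, and r = 3 is a threshold whose crossing reorganizes the system's long-run behaviour.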
In this groundbreaking new book, Manuel DeLanda analyzes all the different genres of simulation (from cellular automata and genetic algorithms to neural nets and multi-agent systems) as a means to conceptualize the possibility spaces associated with causal (and other) capacities. Simulations allow us to stage actual interactions among a population of agents and to observe the emergent wholes that result from those interactions. Simulations have become as important as mathematical models in theoretical science. As computer power and memory have become cheaper they have migrated to the desktop, where they now play the role that small-scale experiments used to play. A philosophical examination of the epistemology of simulations is needed to cement this new role, underlining the consequences that simulations may have for materialist philosophy itself. This remarkably clear philosophical discussion of a rapidly growing field, from a thinker at the forefront of research at the interface of science and the humanities, is a must-read for anyone interested in the philosophy of technology and the philosophy of science at all levels.
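One of the genres listed, cellular automata, can be sketched in a few lines of Python — a hypothetical toy, with Rule 110 chosen arbitrarily for illustration, not an example from the book itself. Local rules applied across a population of cells yield global patterns that no individual cell encodes, a minimal case of the emergent wholes simulations let us observe:

```python
def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton
    (wrap-around edges). `rule` is the standard Wolfram rule number."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # neighbourhood as 3-bit number
        out.append((rule >> index) & 1)              # look up that bit in the rule table
    return out

# Start from a single live cell and watch structure emerge row by row.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Each cell consults only its immediate neighbours, yet the printed rows display growing, interacting structures — exactly the staging of actual interactions among a population of agents that the passage describes.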
One of the most significant epistemological events in recent years is the growing importance of historical questions in the ongoing reconceptualization of the hard sciences. I believe it is not an exaggeration to say that in the last two or three decades, history has almost completely infiltrated physics, chemistry and biology. It is true that nineteenth century thermodynamics had already introduced an arrow of time into physics, and hence the idea of irreversible historical processes. It is also true that the theory of evolution had already shown that animals and plants were not embodiments of eternal essences but piecemeal historical constructions, slow accumulations of adaptive traits cemented together via reproductive isolation. However, the classical versions of these two theories incorporated a rather weak notion of history into their conceptual machinery: both thermodynamics and Darwinism admitted only one possible historical outcome, the reaching of thermal equilibrium or of the fittest design. In both cases, once this point was reached, historical processes ceased to count. For these theories, optimal design or optimal distribution of energy represented, in a sense, an end of history.
This paper will explore how the social ontology of Gilles Deleuze, as recently summed up by Manuel DeLanda, can be used in the context of economic sociology. In particular, the text will study the divergences (as well as similarities) between Deleuzians such as DeLanda and Actor-Network theorists such as Michel Callon and Bruno Latour.
The text starts off from the concept of ‘assemblage’ (agencement), using it as a point of departure for sketching the differences between the two strains of thought. Whereas the concept of assemblage is often used by ANT-inspired writers to loosely denote ‘hybrid collectives’ under constant reconfiguration, in particular in the context of economic agency, Deleuze’s original use of the term is more specific (featuring a number of special properties), and at the same time more generic (used to describe a wide variety of entities).
Theorists have recently devoted growing attention to questions of “the virtual.” This is due, in part, to increasing familiarity with the scientific concepts necessary to its interrogation, as well as the philosophical writings of Gilles Deleuze and those of philosophers he has resurrected, such as Spinoza and Bergson. But this interest is also the result of growing dissatisfaction with current theoretical approaches that rely on “top-down” methods unable to effectively account for the emergence or mutation of systems. Manuel DeLanda, for instance, has referred in his writing to oversimplifications that attribute causes to posited systems such as “late capitalism” without describing the causal interaction of their parts, which would change in different contexts. In his introduction to Parables for the Virtual, Brian Massumi argues that cultural theory’s over-reliance on ideological accounts of subject-formation and coding has resulted in “gridlock,” as the processes that produce subjects disappear in critiques that position bodies on a grid of oppositions (male-female, gay-straight, etc.). In one of his more exceptional examples, Massumi argues that Ronald Reagan’s success as the “Great Communicator” was not due to his mastery of image-based politics to hypnotize an unwitting public. The opposite was the case. Reagan’s halting speech and jerky movements were the source of his power, the infinite interruptions in his delivery so many moments of indeterminacy or virtual potential that were later made determinate by specific receiving apparatuses, such as families and churches. In short, interactions among non-ideological parts produced ideological power. Critiques that consider only the ends of ideology are unable to examine the very processes that create constraining subject-formations in the first place.
Idealists have it easy. Their reality is uniformly populated by appearances or phenomena, structured by linguistic representations or social conventions, so they can feel safe to engage in metaphysical speculation knowing that the contents of their world have been settled in advance. Realists, on the other hand, are committed to assert the autonomy of reality from the human mind, but then must struggle to define what inhabits that reality.