CAS are made up of a population of discrete elements - be they water molecules, ants, neurons, etc. - that nonetheless behave as a group. Regardless of the specificities of the system, CAS are always composed of such parallel Agents. To study their behaviors, we can conceptualize these as simple entities that come with 'pre-set' Rule Based behaviors, the activation of which is predicated upon interactions with surrounding agents. Investigations into such 'automata' informed the research of early computer scientists, including Von Neumann, Wolfram, Conway, and Epstein and Axtell. Simulations using Cellular Automata (CA) or Agent-Based Models (ABM) aimed to discover whether stable patterns of global agent behavior might emerge through interactions carried out over multiple iterations at the local level. These experiments successfully demonstrated how order does emerge through simple agent rules (see Conway's 'Game of Life' video in the feed on the right).
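The Game of Life illustrates this well: each cell is an agent with a fixed rule that depends only on its eight neighbors, yet stable global patterns emerge. A minimal sketch (the set-based representation and the 'blinker' starting pattern are illustrative choices, not part of any canonical implementation):

```python
from collections import Counter

def step(live):
    """Advance Conway's Game of Life one generation.

    `live` is a set of (x, y) coordinates of live cells. Each cell's fate
    depends only on local interactions with its eight neighbors.
    """
    # Count live neighbors for every cell adjacent to at least one live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A 'blinker' oscillates between a horizontal and a vertical bar -
# a simple example of a stable emergent pattern.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))  # the vertical bar {(1, 0), (1, 1), (1, 2)}
```

Note that no cell 'knows' about the blinker as a whole; the oscillation is a global regularity produced entirely by local rules.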
CAS are thus described as Bottom-up, since higher levels of global order arise from simple interactions at the lower, local level. Once these novel global features have manifested, they stabilize - spurring a recursive loop that alters the environment within which the agents operate and constraining subsequent system evolution. Maturana and Varela's notion of autopoiesis, as well as Hermann Haken's concept of the Enslaved State, outlines these dynamics, whereby order emerges stochastically but then stabilizes and self-maintains. Here, concepts developed in Cybernetics thinking leave their mark on CAS, as Feedback is critical in maintaining emergent properties.
Although CA and ABM demonstrate emergent dynamics, they are somewhat limited in that rules are generally static and established in advance. A richer exploration of agents in CAS examines the ways in which bottom-up agents might independently evolve rules in response to feedback. Here agents test various Rule Based schemas over the course of multiple iterations. Through this trial and error process, involving Time/Iterations, they are able to assess their success through Feedback and retain useful patterns that increase Fitness. John Holland describes how these agents, each independently exploring suitable schemas, actions, or rules, can be viewed as adopting General Darwinian processes involving Adaptation, Evolution, + Rules to carry out 'search' algorithms. In order for this search to proceed in a viable manner, agents need to possess what Ross Ashby dubs Requisite Variety: enough heterogeneity to test multiple scenarios and thereby increase the likelihood that fit rules will be discovered.
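This Darwinian search can be sketched as a toy evolutionary loop. Everything here is illustrative: the 'environment' is a fixed bit pattern standing in for Feedback, Fitness is how well a rule matches it, and Requisite Variety appears as the heterogeneity of the starting population:

```python
import random

random.seed(1)  # deterministic run for illustration

# Hypothetical environment: a rule (bit string) is fitter the more
# bits it matches - a stand-in for environmental Feedback.
TARGET = [1, 0, 1, 1, 0, 1, 0, 1]

def fitness(rule):
    return sum(r == t for r, t in zip(rule, TARGET))

def mutate(rule, rate=0.1):
    # Trial-and-error variation: occasionally flip a bit.
    return [1 - b if random.random() < rate else b for b in rule]

# Requisite Variety: a heterogeneous population of random starting rules.
agents = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
start_best = max(fitness(a) for a in agents)

for _ in range(50):  # Time/Iterations
    # Feedback and selection: keep the fitter half,
    # refill the population with mutated copies of survivors.
    agents.sort(key=fitness, reverse=True)
    survivors = agents[:10]
    agents = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(agents, key=fitness)
print(start_best, fitness(best))
```

Because the fitter half always survives, the best fitness never decreases across iterations; variation supplies candidate rules, and feedback retains the ones that work.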