Tuesday, April 15, 2008

Monsters & Studio Reviews

Monsters

Monads are a tight family, infinitesimal differences in their qualities – no discontinuities allowed – You wanna be a MONAD! - conform, hold hands, share a little of yourself with those you touch – no room for Monsters here, this is blood brothers.


The Studio Review Algorithm

Input:
Result presented by Wannabe Architect

Studio Review Algorithm operation:
1. Take Wannabe Architect’s Result
2. Demean It
3. Select randomly from all remaining Possible Answers; if none remain, goto Step 9
4. Instruct Wannabe Architect to rework accordingly by next Review
5. Schedule next Review for 4 days or less
6. Remove Answer from Possible Answer Set
7. If no Answers remain, invite Jury to next Review; declare it the Final Review.
8. Goto Step 1 within 4 days
9. Speak into your cell phone and leave the room
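For the literal-minded, the review loop runs as a short program. A minimal Python sketch, with every name invented for illustration:

```python
import random

def demean(result):
    print(f"That is the weakest {result} this studio has ever seen.")

def rework(result, answer):
    return f"{result} reworked toward '{answer}'"

def studio_review(result, possible_answers):
    answers = list(possible_answers)
    while True:
        demean(result)                        # Step 2
        if not answers:                       # Step 3: none remain, goto Step 9
            break
        answer = random.choice(answers)       # Step 3
        result = rework(result, answer)       # Step 4: due by next Review
        # Step 5: schedule next Review for 4 days or less (not modeled)
        answers.remove(answer)                # Step 6
        if not answers:                       # Step 7
            print("Jury invited. The next Review is the Final Review.")
        # Step 8: goto Step 1 within 4 days
    print("(speaks into cell phone, leaves the room)")   # Step 9

studio_review("parti diagram", ["blob", "grid", "fold", "void"])
```

Note that the loop terminates only by exhausting the Possible Answer Set, which is the joke.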

Monday, April 14, 2008

what if/then what

Considering the connection between algorithm and topology, I am drawn to the idea raised in our last discussion regarding the fundamental nature of the if/then operation. In reading an essay about topology, I came across the Jordan Curve Theorem, which I think has an interesting correlation with the if/then algorithmic operation in both its function and simplicity.

A simple closed curve (one that does not intersect itself) is drawn in a plane. This curve "C" divides the plane into two domains, an inside "A" and an outside "B." Even if the plane and the curve are deformed, these two classes persist and force any curve traversing from A to B to cross C, regardless of the deformations.

Within the topological theorem, there are discrete operations concerning discrete elements (in this case, geometrical forms). While a curve may be reducible to a set of points, the theorem's operation is contingent on the curve being taken as irreducible: it is essential to this particular function, and therefore it is discrete. While I do not intend to deny the divisibility test for discreteness, I do think this speaks to the nature of the discrete in terms of algorithm.

For instance, the if/then statement, as part of a rule set, functions in a similar fashion as the curve in the Jordan Curve Theorem. It acts as a dividing line that creates, or forces the emergence of, two distinct states that are both contingent on the single statement.
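A minimal sketch of that reading, with the curve C taken as a unit circle purely for illustration; the single predicate is the dividing line, and the two return values are the domains it forces into existence:

```python
# One if/then partitions the plane into the two classes of the
# Jordan Curve Theorem. Here C is the unit circle.

def classify(x, y):
    if x * x + y * y < 1.0:   # the single if/then plays the role of C
        return "A"            # inside
    return "B"                # outside

# Walk a straight path across the plane: the class can only change
# where the predicate flips, i.e. where the path crosses C.
path = [(-2.0 + 0.1 * i, 0.0) for i in range(41)]
print("".join(classify(x, y) for x, y in path))
# BBBB...AAAA...BBBB  (two classes, one boundary)
```

However the boundary case is decided, the two classes persist under any deformation that keeps the predicate a simple closed curve.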

Furthermore, the sequence of rules (rule set) can be discrete when the operation of another algorithm is contingent upon the entirety of this rule set. Von Neumann speaks to this in his discussion of self-reproduction in the General and Logical Theory of Automata.

If we look at the nature of the algorithm through the lens of "topology," does the algorithm change if the order of rules is reorganized to yield different results? Is the Turing machine "topologically" consistent even if the order of operations is varied based on inputs?

While I think the idea of an infinite rule set is logically sound, I think it should be appended to reflect the necessity for recursion in this infinite string. This would suggest genetic instructions akin to evolution, rather than a sprawling sequence that simply doesn't end.
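One standard picture of such "genetic instructions" is a Lindenmayer system, where a finite, recursive rule set generates an unbounded string. A minimal sketch (the choice of rules is arbitrary, purely for illustration):

```python
# Two finite rewrite rules, applied recursively, produce a string of
# unbounded length. The rule set never grows; the recursion does the work.

RULES = {"A": "AB", "B": "A"}   # the entire "genome"

def generate(axiom="A", generations=7):
    s = axiom
    for _ in range(generations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

print(generate())
# ABAABABAABAAB... the lengths grow as Fibonacci numbers,
# yet the instructions remain two lines long.
```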

Thursday, April 10, 2008

Lect 13

Continuing the discussion of the discrete. Keep in mind that the background is to rethink the concept of computation in algorithm vis-à-vis Turing's paper, among other things. We've looked at analogies with architecture and other fields, and now we're looking at his thought experiment discretely, that is, in terms of one aspect of what the experiment requires: the discreteness of its output, 0 and 1. The blog entries were terrific because they approach that style of thinking.

We want that grammatical grasp of our concepts and the flexibility of thought to see it redefine our architectural ones. On algorithm and geometry, our points suggested no internal connection save in a few instances. If algorithm is expanded into any notion of operation, then geometry is constructed from operations but defined not by them but by geometrical axioms. Geometry as an organization specific to matter is a language by which to characterize it, but it doesn't describe the process of formation. Topology, on the other hand, might be more internally connected: there is always an order to the execution of the steps, and they are related to each other in specific, fundamental ways.

We were asking what is discrete. Today: the elements, objects, entities, etc., which remain constant and upon which the rule set acts (a variable can be one of these, I suppose). Q: is the space in which a sequence runs an "element"? We possibly reduced this with the fruit-shopping analogy to an if/then condition. And this might seem on the whole to have more to do with topology. It certainly brings us back to the problem of logic.
On a few definitions:
What is a rule space as opposed to a rule set?
Can there be an infinite rule set?
Can the terms of an algorithm magically change? (Can 0 just become 1?) In contrast to: can the rules or rule set change?

Are we looking at anything like Descartes' invention of analytic geometry? (I mean the algebraization of geometry, the application of algebraic procedures to geometrical terms?)

What is discrete?
1) Are rule sets discrete?
2) Are Objects, elements, terms and entities discrete? (That is, those things upon which the rule set acts)
3) Is a step in the carrying out of the rule set discrete?
4) Is the if/then conditional discrete?

One thing to keep in mind is that computation has changed the relation between geometry and topology - but this is perhaps a feature of analytical computing.

Another thing to consider is whether algorithm and topology are possibly more connected than algorithm and geometry.

In other words, algorithm might act on geometrical elements, but geometry isn't essential to its function. The if/then and the question of loops and so forth seem to suggest an internal connection with topology.

In terms of continuous, we mean possibly divisible. Do we mean infinitely divisible?

Wednesday, April 2, 2008

On algorithms and boats

We have to do something to celebrate the seminar. I suggest renting a sailboat. Any ideas? Please be discrete.

Tuesday, April 1, 2008

Lect 12

Today we were going to look at a few problems about the discrete and the continuous regarding the Chu and DeLanda texts. Chu is obviously a proponent of the discrete as the basis for everything. DeLanda is really a proponent of the continuous. Both are talking about genetic, that is, algorithmic systems. I'll get to that later.

Your blog entries for this week were to discuss 1) the relationship between the algorithm and geometry and 2) to identify in any narrative form something about the discrete and the continuous. William offered a theory somewhat close to Newton's: that where there is matter there is geometry, but that algorithm as such isn't really connected. That is, it isn't an essence. One might want to ask what we mean about geometry as inhering in matter; it is, after all, a formal language. Bill offered an insight into the notion that algorithms are only ever operations on geometry. But it raised an interesting point: although there may be axioms for geometry, we can't say that, yet, for algorithm, since it is not a formalized structure as such; and so although there may be infinite ways in which to construct a triangle, there are limits to the definition of what is a triangle. Well, let's say this is a provocative idea. Frank suggested that in a way every schema of the algorithm has embedded in it certain principles but that, surprisingly, these were topological if anything, not geometrical.

At a certain point we also discussed the question again of experience. I would like to edit my comments about that, since I felt a bit rushed at the end of class to say something, but I'm not at all satisfied with what I said. In any event, I would return to my previous claim that the experience of a system, say of language, is not the same as an explanation or even a definition of language. To say that we operate with rules is not yet to say whether and how we experience them, and whether we intend them when we express certain things. If I give you a basic set of instructions to add 2 to the next number, and you get to the 50th operation but come up with an odd number, for whatever reason, it is hard to say that I intended for you to carry it out such that you always get an even number after 50. This is something from Wittgenstein.
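The add-2 example can be made concrete. A minimal sketch, with the deviant continuation invented for illustration (Wittgenstein's own pupil famously diverges after 1000):

```python
# Two "followers" of the instruction "add 2" that agree on every case
# anyone has checked, then diverge. The bare instruction does not settle
# which continuation was intended; that is the Wittgensteinian point.

def add_two(n):
    return n + 2

def deviant_add_two(n):
    return n + 2 if n < 100 else n + 3   # agrees through the 50th step

a, b = 0, 0
for step in range(1, 53):
    a, b = add_two(a), deviant_add_two(b)
    if a != b:
        print(f"step {step}: {a} vs {b}")   # step 51: 102 vs 103
        break
```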
There were also provocative notions on the snowflake: geometry can describe it but can't account for the process by which it formed. And the question also of the operations of the coin toss when the coin is a sphere - the contrast between the discrete and the continuous.

I'll try to say more about this later. To return to Chu and DeLanda, let me just emphasize two things: for Chu, material systems are themselves products of algorithm; algorithm doesn't derive from dynamical systems. What does this mean? For DeLanda, populational, intensive, and topological thinking are essential to genetic algorithms. Why does he exclude the discrete? Finally, why in Leibniz are there no such things as monsters?

Fw: Summary lect 10


-----Original Message-----
From: "Peter Macapia" <peter.dora@tmo.blackberry.net>

Date: Sun, 30 Mar 2008 08:27:26
To:"Peter Macapia" <petermacapia@labdora.com>
Subject: Summary lect 10


Conflict, Problem, Question
We looked at two problems, one relating to Turing's invention as a thought experiment and how it led to computation, the other about algorithm and geometry. Instead of looking at Turing's problem as a problem we could relate in its entirety to architecture, I was asking if we could just understand something of the implications of his use of a binary system, or of the discreteness of something being 1 or 0. And so I was asking in what sense we can think of cases of the discrete. And we looked at mathematical and other types of the discrete, integers vs. irrationals. And then I asked us to consider continuity. The discrete, it seems, is the basis of computation, of the algorithm. It is a whole unto itself. Frank suggested at first that there is a problem of representation here, of symbolization. But we soon came to the issue of the fact that the discrete in its essence doesn't really require that. It is a structural and formal property, not one of symbolization.

Geometry and algorithm

Geometry is in some way a phenomenon of the spatial and structural organization of matter as a result of system optimization. For instance, soap bubbles or air bubbles (under water) are perfectly spherical in shape if there is no uneven disturbance. We know well that the spheres are the result of the surface tension of the continuous soap membrane (or, in the latter example, the water surrounding the air bubble), which tends to minimize the surface area. The hexagonal pattern of the honeycomb is another good example of an optimized structural and spatial organization of material, in this case for bees to build homes. Basically, all naturally emergent geometries are the most effective form for matter to exist and for dynamic systems to operate. Ineffective geometry emerges once we learn how to manipulate systems willfully; that is when natural systems become artificial. And despite our invention of ineffective geometry, many other effective geometries are created by artificial processes such as alloying.

It is invalid to say all algorithms are related to geometry, because there are algorithms dealing with formless tasks. However, because of the computational nature of algorithmic operations, most of them can always be expressed in geometrical language, e.g. plotting graphs from numerical data. Algorithm is the artificial manipulation of matter and systems through the systematic undertaking of procedures which are believed to have specific effects in specific contexts. The primitive idea of the algorithm is a method by which people can get things done. In geometrical discussion, then, algorithm must be involved in the creation or alteration of geometry, following this notion of algorithm as method.

Let’s say we’re all university professors for a moment and we’re trying to figure out how to influence how the kids in our classroom are going to interact.

Let’s first try a seating chart:

Let’s assume for a moment that there are two major tendencies in the classroom: talkers and sleepers and that talkers have the ability to excite the students in the seats around them, while sleepers have the ability to dampen talkers into a little lecture nap session. The interactions are limited to the students directly surrounding the one under consideration. That map looks like this:

We can expect to change the outcome of the class by altering the initial condition.

Ok, let’s do some science on these kids. The point Peter made last week is that if we have a discrete set of inputs (512), we can generate a continuous set of outputs (length of class).
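The 512 is worth unpacking: a 3-by-3 seating chart with two student types has exactly 2^9 = 512 initial conditions. A minimal sketch of the sweep (the update rule here is invented, purely to make it concrete):

```python
import itertools

def neighbors(grid, r, c):
    return [grid[i][j] for i in range(max(0, r - 1), min(3, r + 2))
                       for j in range(max(0, c - 1), min(3, c + 2))
                       if (i, j) != (r, c)]

def step(grid):
    # a seat is noisy (1) next minute iff most of its neighbors are noisy now
    return [[1 if 2 * sum(neighbors(grid, r, c)) > len(neighbors(grid, r, c))
             else 0 for c in range(3)] for r in range(3)]

def minutes_of_chatter(grid, class_length=90):
    for minute in range(class_length):
        if not any(any(row) for row in grid):   # everyone asleep
            return minute
        grid = step(grid)
    return class_length   # still talking when class ends

outcomes = []
for bits in itertools.product([0, 1], repeat=9):      # all 512 charts
    grid = [list(bits[0:3]), list(bits[3:6]), list(bits[6:9])]
    outcomes.append(minutes_of_chatter(grid))

print(len(outcomes), min(outcomes), max(outcomes))
# 512 discrete inputs, a spread of class durations as outputs
```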

Let’s try turning this on its head. Scrap the seating chart. Sleepers and talkers mill about at will and interact as they please. The space of possible initial conditions for this kind of interaction is beyond what my modern computer can compute for more than 23 students

– in other words, it’s virtually continuous. The interactions over the course of class produce a discrete experience that might be somehow captured in numbers. If we can’t even thoroughly describe what might happen first, we certainly cannot say that if Sally comes to class first, then it would follow that… We might begin to operate on the class by setting up rules, but the rules can’t be given by the possible combinations of student interactions; again, the possible space is too large to quantify. The rules then become behavioral heuristics, such as “if Sally isn’t acting cool, then don’t talk to Sally,” rather than the possible combinations of black and white tiles. This doesn’t seem far from the Game of Life, although I was thinking of more complex simulations, like the racial preference study.
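The racial preference study is presumably Schelling's segregation model, where the rule is exactly such a behavioral heuristic ("move if too few neighbors are like me") rather than an enumeration of the combinatorial space. A minimal sketch:

```python
import random

SIZE, THRESHOLD, STEPS = 10, 0.34, 5000

cells = ["x"] * 45 + ["o"] * 45 + [None] * 10   # two groups plus empty seats
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(r, c):
    me = grid[r][c]
    if me is None:
        return False
    nbrs = [grid[i][j] for i in range(max(0, r - 1), min(SIZE, r + 2))
                       for j in range(max(0, c - 1), min(SIZE, c + 2))
            if (i, j) != (r, c) and grid[i][j] is not None]
    # the heuristic: unhappy if fewer than a third of my neighbors match me
    return bool(nbrs) and sum(n == me for n in nbrs) / len(nbrs) < THRESHOLD

for _ in range(STEPS):
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    if unhappy(r, c):
        empties = [(i, j) for i in range(SIZE) for j in range(SIZE)
                   if grid[i][j] is None]
        i, j = random.choice(empties)
        grid[i][j], grid[r][c] = grid[r][c], None   # just move; no global plan

for row in grid:
    print("".join(ch or "." for ch in row))   # clusters emerge anyway
```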

The conclusion I draw from this is that this might be a way to classify these different programs capable of generating complexity. Do all of them have the same organs in different interrelationships?

Relative Discretion, Relative Complexity

In “Probable Geometries” Greg Lynn defines the joint concepts of discrete and continuous using Jakob Bernoulli’s 1713 description of discrete probability. Using the statistical analogy of a coin toss, Bernoulli states that a discrete system would include a coin whose symmetry [and geometric form] permits the speculator to state in advance that the coin will almost always land on one of its two sides and thus almost always come to rest in a position that reveals only either a “head” or a “tail.” A continuous system, according to Lynn and Bernoulli, includes a coin which, due to certain deformations in its geometry, does not permit the speculator to state in advance the possible positions in which the object might come to rest. Therefore, an experimental series must be performed to establish the frequency [or probability] of these possible positions.

To construct one such experiment, take a coin, a nickel as we know it today—Jefferson on one side, his home always his opposite—and inflate it from its centroid to form a perfect sphere. Jefferson and Monticello are deformed equally into two hemispheres connected along their equator by a seam—previously the sides of the original nickel cylinder—now similar to the make-up of a red rubber bouncy ball, less the red, rubber, bouncy. The nickel, once approximately the diameter of a marble, is now, in sphere form, closer to the size of a BB, as no alchemy has been performed here. No additional molecules have been added to the nickel in order to bring it to its new shape; both the original and the new nickel are equal in weight, mass, density and the like. The first nickel when tossed performs as Bernoulli suspects, while the second now rolls until finding a perfectly flat position on the ground and reveals, when viewed orthogonally in plan, most of Jefferson’s ponytail, “Liberty,” the mint date, along with “Unum,” the right third of Montic-E-L-L-O, the word “America” and the smooth shiny seam holding the two hemispheres together. The probability that the nickel will land to reveal exactly the same image again once in the next one-hundred tries is slim at best.
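A sketch of the contrast as an experiment one can actually run: the flat nickel's outcome set is statable in advance; the spherical nickel's resting orientation is a draw from a continuous space, so outcomes can only be recorded, never enumerated:

```python
import math
import random

def toss_flat_nickel():
    return random.choice(["Jefferson", "Monticello"])   # discrete, known in advance

def toss_spherical_nickel():
    # resting orientation: a uniformly random point on the unit sphere
    z = random.uniform(-1.0, 1.0)
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(theta), r * math.sin(theta), z)

flat = [toss_flat_nickel() for _ in range(1000)]
print(flat.count("Jefferson") / 1000)   # converges toward 0.5

sphere = {toss_spherical_nickel() for _ in range(1000)}
print(len(sphere))   # 1000: no orientation ever recurs exactly
```

The frequency table Bernoulli asks for can only be built by binning these continuous outcomes after the fact.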

As a system we can understand Bernoulli’s analogy conceptually in architecture and in literature and in most other arts and soft sciences. From a physicist’s standpoint, however, I imagine that our nickels are either both discrete or both continuous. Both of the nickels are made up of atoms and are probably divisible as such; economically each is still divisible into 5 pennies; what is important, though, is that the difference between one and the other is their distinct formal qualities. Yet taste, smell, sound and other typical analyses all produce the conclusion that each is qualitatively the same. Neither is more or less continuous than the other. Yet Bernoulli, and Lynn, and I continue to believe in the coin toss analogy. This has to do with the notion of relative discretion. Each system must be able to define, under its own pretense, its own discretion. Under this law a toroid may be considered discrete if it is the primitive geometry from which the system begins. Until acted on by forces—fluid, dynamic, or otherwise—the toroid will remain discrete and may serve as a constant against which all post-operative geometries within the same experiment may be measured. In this way the coffee cup may be considered continuous although the two geometries may contain the same number of vertices, edges, faces, etcetera.

In each of these models an operation is performed on the discrete to produce the continuous.

It would seem also that the notion of relative discretion may pertain to the algorithm itself. We have discussed at length the complexity produced by such simple rule-based algorithms as cellular automata and Turing’s universal machine. Specific conversations in class have debated the actual complexity of the cellular automaton. During one particular session the argument was made that in both the CA and the UM, because the rules are stated before the experiment is begun, it would be conceivable that the result of each may be predicted with some certainty. The opposite side of the debate acknowledged that, while indeed the rules are known from the start, the aggregation of these simple rules cannot be predicted, and that the only sure way to realize the results would be to run the algorithm. Again, statistically this must be true, although the complexity of the rule set will always have a direct relationship with the odds of predicting the outcome. The coin, for instance, must be tossed in order for us to know the result with 100% certainty. Before the coin is tossed, though, the speculator is given 50% odds. The CA must be predictable to some degree of certainty in the same way.
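An elementary cellular automaton makes the debate concrete: the rule table below is complete before the experiment begins, yet in practice the only way to know row 200 is to compute rows 1 through 199. A sketch using Wolfram's Rule 30:

```python
RULE = 30   # Rule 30, fully specified by one byte

# the complete rule table, known before anything is run
TABLE = {(a, b, c): (RULE >> (4 * a + 2 * b + c)) & 1
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def run(width=79, steps=20):
    row = [0] * width
    row[width // 2] = 1                       # a single live cell
    for _ in range(steps):
        print("".join("#" if x else " " for x in row))
        row = [TABLE[(row[i - 1], row[i], row[(i + 1) % width])]
               for i in range(width)]

run()   # eight known rules; an aggregate no one predicts without running it
```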


An analogy can be made to the game of blackjack, which comes with its own set of rules. Similar to the rules of a cellular automaton, the rules of blackjack are established at the start. These rules are largely Boolean—hit or stand—with some added complexity in split, double, and bust. Given three cards, the two in the speculator’s hand and the one dealer card shown, a single response is expected. The speculator’s hand includes an ace and a nine while the dealer shows a two—the speculator is expected to stand. The speculator has two eights while the dealer shows a nine—the speculator is expected to double his or her wager and hit. Although the chart would indicate that blackjack is significantly more complex than the coin toss, the pit boss knows that blackjack odds are always 42%, tipped only slightly in his or her favor. Before each card is dealt the speculator knows what to do with each new circumstance and exactly what those new circumstances might be; the ability to count cards, where he or she is seated at the table, the size of the deck and other advantages may even help to bring these 42% odds closer to the 50% available at the coin toss. As discretion is relative, so too is complexity, and 42% can hardly be considered complex. This is not to say that complexity is not available in blackjack. It does suggest that the complexity must come from a secondary influence or a second-level algorithm. Studying a full blackjack table you will notice that the response of a speculator immediately to the right of the dealer will directly impact the game of the speculator to his or her right, and so on around the table. It is not uncommon for tempers to flare if one speculator offers a response not specified by the given rule set. Such a response will spike the flow of the next player’s game, resulting in him or her receiving the card before or after the card they would have been dealt had the game been carried out by a computer alone. The possibility for this type of random operator asserts an open-endedness to the system and allows for the unpredictable result.
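The two plays quoted above, written as the kind of rule table the speculator memorizes; the entries are only the examples from this post, not an authoritative strategy chart:

```python
# (player hand, dealer upcard) -> expected response
STRATEGY = {
    ("A+9", 2): "stand",           # ace and nine against a dealer two
    ("8+8", 9): "double and hit",  # two eights against a dealer nine,
                                   # as described above
}

def respond(hand, dealer_card):
    return STRATEGY.get((hand, dealer_card), "consult the full chart")

print(respond("A+9", 2))   # stand
print(respond("8+8", 9))   # double and hit
```

The complexity in the full game comes, as argued above, not from this table but from the second-level interactions around it.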

Similarly, Lynn uses the term incomplete to describe the random section method used by both Le Corbusier and Rem Koolhaas, at the Maison Citrohan and the Très Grande Bibliothèque respectively, to produce probable geometries. In "Strategizing the Void" Koolhaas uses the terms regular and irregular.

“Imagine a building consisting of regular and irregular spaces, where the most important parts of the building consist of an absence of building. The regular is the storage, the irregular is the reading rooms, not designed, simply carved out. Could this formulation liberate us from the sad mode of simulating invention?”

Here a discrete system is established by regular floor plates and structural organizations, and an algorithm is deployed for networking the system vertically through elevators and escalators. While the system is technically made continuous in this operation alone, it is still relatively un-complex. A secondary, random operator is then run through the system to “excavate where efficient”—thus producing the irregular.

A similar process can also be seen in Tschumi, which he describes as Space/Event (Movement) and tests at the Parc de la Villette. He includes the term movement but always secondarily, and often removes it completely to say Event Space. A more indexical description of his theory would reorder the terms: space-movement-event. Here a discrete system is established with the construction of space. The movement of the user then activates the space to produce the event.

Knowns, Unknowns, and GPS

“…as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know.”
-Donald Rumsfeld, February 12, 2002


It can be difficult to make high-stress decisions because, like Donald Rumsfeld, we know there are knowns and unknowns. The algorithm articulates this by reflecting a series of choices like the branches of a tree, with each decision starting at the same place but traveling its own path down one of many possible branches. The troubling issue is all the branches you cannot travel down, which may possibly yield better results than the one you traveled. Considering that the unknowns vastly outnumber the knowns, it also looks to be almost impossible to make a good decision in terms of the statistical possibilities. This tree of possible choices poses a geometry to decision making, as defined by its relation to the algorithm. Yet this proposes a model of decision making which is incompatible with the linear formation of time, which always places one thing before or after another. The realization of how multiple situations could exist at this point in time makes time’s linearity seem tenuous, as any one of a series of possible outcomes could happen if the process were repeated enough. Time changes from a linear process to a recursive one within this model of understanding, and its repetition becomes a way of abstracting a new truth or meaning through the algorithm and its resulting tree geometry. Time becomes nothing more than another variable within the larger decision process of the algorithm, which is somehow foreign to the human condition.
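The tree can be counted directly: with b choices at each of d moments there are b^d possible paths, of which a life travels exactly one. A minimal sketch:

```python
from itertools import product

def decision_paths(branches=2, depth=10):
    # every possible history: one tuple of choices per path through the tree
    return list(product(range(branches), repeat=depth))

paths = decision_paths()
traveled = paths[0]        # the single branch actually taken
print(len(paths))          # 1024 possible histories
print(len(paths) - 1)      # 1023 unknowns, never visited
```

Even at two choices and ten steps, the untraveled branches outnumber the traveled one a thousandfold; linear time commits us to exactly one tuple.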
I think this idea is present in this interview of Paul Virilio by John Armitage:

John Armitage: How do these developments relate to Global Positioning Systems (GPS)? For example, in The Art of the Motor (1995 [1993]), you were very interested in the relationship between globalisation, physical space, and the phenomenon of virtual spaces, positioning, or, 'delocalization'. In what ways, if any, do you think that militarized GPS played a 'delocalizing' role in the war in Kosovo?
Paul Virilio: GPS not only played a large and delocalizing role in the war in Kosovo but is increasingly playing a role in social life. For instance, it was the GPS that directed the planes, the missiles and the bombs to localised targets in Kosovo. But may I remind you that the bombs that were dropped by the B-2 plane on the Chinese embassy, or at least that is what we were told, were GPS bombs. And the B-2 flew in from the US. However, GPS are everywhere. They are in cars. They were even in the half-tracks that, initially at least, were going to make the ground invasion in Kosovo possible. Yet, for all the sophistication of GPS, there still remain numerous problems with their use. The most obvious problem in this context is the problem of landmines. For example, when the French troops went into Kosovo they were told that they were going to enter in half-tracks, over the open fields. But their leaders had forgotten about the landmines. And this was a major problem because, these days, landmines are no longer localized. They are launched via tubes and distributed haphazardly over the territory. As a result, one cannot remove them after the war because one cannot find them! And yet the ability to detect such landmines, especially in a global war of movement, is absolutely crucial. Thus, for the US, GPS are a form of sovereignty! It is hardly surprising, then, that the EU has proposed its own GPS in order to be able to localise and to compete with the American GPS. As I have said before, sovereignty no longer resides in the territory itself, but in the control of the territory. And localization is an inherent part of that territorial control. As I pointed out in The Art of the Motor and elsewhere, from now on we need two watches: a wristwatch to tell us what time it is and a GPS watch to tell us what space it is!
http://www.ctheory.net/articles.aspx?id=132 March 31 2008

It seems that space and time both become variables within some larger series of decisions which determine who is in control: Are there landmines here? Will this place be bombed in five minutes? It also becomes of critical importance how the algorithm is defined, as whatever value system the algorithm identifies becomes the system by which space or geometry, time, and their related decisions will be rationalized.

Geometry, Algorithm and Identical Snowflakes

“Several factors affect snowflake formation. Temperature, air currents, and humidity all influence shape and size. Dirt and dust particles can get mixed up in the water and affect crystal weight and durability. The dirt particles make the snowflake heavier, and can cause cracks and breaks in the crystal and make it easier to melt. Snowflake formation is a dynamic process. A snowflake may encounter many different environmental conditions, sometimes melting it, sometimes causing growth, always changing its structure.” Geometry and geometry rules are what we use to describe, ‘dismantle,’ the snowflake. If we were to describe the process of its formation, geometry wouldn’t help.

Geometry is what we use to describe its form, but geometry has almost nothing to say about the way it is created and formed. The formation of a snowflake is such a multi-parametric, non-linear process (an algorithm) that no matter how many times it is run, no two identical snowflakes will ever be formed. We may consider two snowflakes identical, maybe because our eyes can read/recognize/identify only major differences in shapes, disregarding the slight differences. But snowflakes are in fact always non-identical, as they have varying precise numbers of water molecules and “spin of electrons, isotope abundance of hydrogen and oxygen” [I’ve no idea what these are; I found it on a chemistry website].