CALCULATED CINEMA

Eugeni Bonet
... for several years now,
I've been using the simplest system of all - the binary system - instead
of the decimal one, in other words, progression two by two because
I've found that it helps perfect the science of numbers. I only use
zero and one as numeric signs and, on reaching two, I start again.
G.W. LEIBNIZ, 1703
From the very start, the computer or digital calculator
was much more than just a mathematical machine. As soon as data or information
could be translated into a binary system of zeros and ones, its ability
to perform high-speed, complex calculations went far beyond a mere arithmetical
function. Thus, information could be verified, combined and processed through
logical operations and in keeping with the principles of universal knowledge
(stored and updated in the form of memory, programs, databases, etc.).
Computers are considered machines of certainty, reason
and knowledge. This aspiration to mechanical knowledge was foretold by
people like Llull and Leibniz in the idea of a combinatoria universalis
whereby logic blends with the idea of mission, or theological inspiration,
by way of the systematic diagramming of syllogisms, and through the binary
code as a compilation of the entire system of culturally infused dualities:
all/nothing, light/dark, day/night, life/death, body/soul, presence/absence,
true/false, left/right, vertical/horizontal, masculine/feminine, thesis/antithesis,
etc.
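Leibniz's binary progression described in the epigraph - only the signs 0 and 1, starting over on reaching two - can be sketched in a few lines of modern Python (an illustration of the principle only; the function name is my own):

```python
def to_binary(n: int) -> str:
    """Write a non-negative integer with only the signs 0 and 1,
    'starting again' on reaching two, as Leibniz describes."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder after dividing by two
        n //= 2                    # the progression "two by two"
    return "".join(reversed(digits))

# The decimal progression 0, 1, 2, 3, 4, 5 becomes:
print([to_binary(n) for n in range(6)])  # ['0', '1', '10', '11', '100', '101']
```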
In the mid-19th century, George Boole formulated his "laws of thought",
which were translated into algebra and the operators that now bear his
name. In the following century, Norbert Wiener laid the foundations of
cybernetics (from Greek kybernetike: the art of steering) as a science
of organic control and communication systems, irrespective of the physical
nature of the integrating organs, whether they be of a biological or artificial
nature; his disciple, Claude Shannon, quantified the arithmetical unit
of information and called it a bit (a contraction of binary digit); John
von Neumann studied the brain as a cybernetic model capable of being transferred
to the artificial brain of electronic machines, and established several
principles for programming logic and computer architecture.
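Boole's operators, applied to the two binary values that Shannon would later quantify as bits, amount to a handful of truth tables. A minimal sketch (the variable names here are purely illustrative):

```python
# Tabulate Boole's elementary operators over the binary values 0 and 1.
# Each row: a, b, a AND b, a OR b, NOT a.
truth_table = [(a, b, a & b, a | b, 1 - a)
               for a in (0, 1) for b in (0, 1)]

for row in truth_table:
    print(row)
```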
Another aspect that had been foreseen by certain visionaries and pioneers
dealt with the multiple functions and versatility of the machines they
imagined. In the first half of the 19th century, the mathematician Charles
Babbage designed a machine called the Analytical Engine
- his earlier Difference Engine was conceived rather more as an arithmetical
assistant - and his dream was shared by Lady Ada Lovelace, who combined
the poetic imagination and mathematical knowledge inherited from her famous
parents: Lord Byron and "the Princess of Parallelograms",
as Byron called his wife, Annabella (subsequently, in 1979 and as
a sign of homage, Ada was the name chosen for a programming language developed
by the US Department of Defense).
Babbage's mechanical design claimed to incorporate the element of "steering"
and the memorisation of instructions via perforated cards similar to those
used by Joseph Marie Jacquard in his automatic loom. "The Analytical
Engine weaves algebraic designs like Jacquard's machine weaves flowers
and leaves", observed Lady Ada. Other testimonies that have reached
us indicate that these romantic forerunners speculated about other uses
that a hypothetical machine could have apart from arithmetic calculation.
These included games of chance and intellect, and applications for musical
composition and ornamental drawing.
Almost a century later, Alan Turing, who would later be involved in building
the first "real" computers, described the principles of another
hypothetical and singular machine whose sole function was of a rather more
philosophical nature. But, besides posing questions that are still valid
regarding the intelligence of machines and the axiomatic fundamentalism
of symbolic logic, Turing was also ahead of his time when he suggested
one of the simplest theoretical models - based on yes
and no, one and zero decisions - for a universal machine capable
of emulating any other type of machine. This is what subsequently became known
as a metamedium: a medium (without being one in itself) which, according
to instructions received, can simulate other existing media, even one
that has no imaginable physical incarnation.
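Turing's theoretical model can be sketched as a small simulator: a tape, a head, and a table of yes/no decisions. The example below is a hypothetical illustration (the encoding and names are my own, not Turing's notation), with a trivial program that inverts a string of bits:

```python
# A minimal Turing-machine sketch: states, a binary tape, and a transition
# table of (state, symbol) -> (new_symbol, move, new_state).

def run(tape, rules, state="start", halt="halt", pos=0, max_steps=10_000):
    tape = dict(enumerate(tape))          # sparse tape of 0s and 1s
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(pos, None)      # None marks a blank cell
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += move
    return [tape[i] for i in sorted(k for k in tape if tape[k] is not None)]

# Rules: walk right, flipping each bit; halt on the first blank cell.
flip = {
    ("start", 0): (1, +1, "start"),
    ("start", 1): (0, +1, "start"),
    ("start", None): (None, 0, "halt"),
}

print(run([1, 0, 1, 1], flip))  # the bitwise complement: [0, 1, 0, 0]
```

A different transition table turns the same machinery to a different task, which is precisely the point of the universal model.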
Electronic calculators were first used - several diverse and very peculiar
devices appeared on the scene in the 40s - mainly in military and administrative
areas to facilitate tasks such as decoding, labour censuses and statistics,
and the mechanisation of office work in general. Nowadays, it is impossible
for us to tell, in the cumbersome installations of those primitive
machines, where the machine ended and the furniture began, or
where the threshold stood between the automated functions and the populous
personnel who entered instructions and extracted data. But that is where
office automation - the business machine - began, though the motives pertaining
to military strategy, defence and national supremacy can also be clearly
recognised in the drive behind many technologies (without looking any
further, think about the most immediate origins of the Internet and the
techniques of electronic simulation and virtual reality).
The incorporation of a display monitor or "graphic terminal"
came about some time later - the Whirlwind system, developed in 1953 at
the Massachusetts Institute of Technology (MIT), included one - employing
the cathode ray tube commonly used in televisions, as
well as in radar and oscilloscopes. Thus, the electronic brains acquired
a face (or an interface, since it was also a matter of establishing a
friendly dialogue) and limbs (peripherals). And, even though the computer
at that time was still a "monstrous" cabinet (or several joined
together), that is, a complex, highly costly mainframe, its applications
began to spread to civilian sectors through research laboratories, businesses,
universities and academic institutions. (Although the prodigious chip
came into being in 1958, the invasion of microelectronics did not affect
information technology until twenty years later.)
In the 50s and more so in the 60s, the horizons for
binary machinery opened up to art and design, education, scientific display,
visual and audio-visual creation, music and literature, sculpture and
architecture, choreography and... the inchoate world between the lines
that is developing along with the growing trend towards shared media applications
in art and technology. The advent of new apparatus or peripheral instruments,
whether interface or output devices (light pens, plotters, scanners,
film recorders, etc.), the development of programmes and systems dedicated
to specific applications, and the initial explorations of interactivity
and hypertext began to force the formation of a new frontline in art
and aesthetics. People began to talk about computer art or cybernetic
art, and of information or generative aesthetics.
The first exhibitions were held in 1965 at the Studio
Galerie of the Technische Hochschule in Stuttgart and the Howard Wise
Gallery in New York, and were distinguished by a peculiar feature, quite
unusual at that time: the authors of the works on show were mainly scientists,
mathematicians or engineers, though some straddled the fence
as technologist-artists (or vice versa). However, this was not a passing
circumstance but a fact that would be repeated and confirmed in subsequent
exhibitions, among which special mention is deserved by the show titled Cybernetic
Serendipity: The Computer and the Arts (1968-69), curated by Jasia Reichardt,
as does the book, Computer Graphics/Computer Art (1971), by Herbert W.
Franke. (A fait accompli as valid today as ever, especially in view of
the 90s revival of "technological art". Yet several wary and
cautionary questions remain unresolved; some
of the vainest being those based upon the painful perception that this
medium trespasses upon and dissipates artistic identity.)
In 1967, A. Michael Noll, one of the first to promote
the convergence between the computer and art, spoke about the benefits
brought to the technical/scientific community by the artistic exploration
of new technologies ("what artists can learn by using these new computer
techniques may be valuable for scientists and engineers") as well
as the ways in which art circles themselves could profit by them. Consequently,
this concern is reflected in Noll's work, and that of his peers, through
hypotheses or models for automating the creation of optical, geometric,
kinetic, dynamic, psychedelic and stereoscopic works of art. A primary
benefit could be found, for example, in reducing the labour-intensive
side of the process involved in such delicate manoeuvring. It is therefore
significant that some of Noll's initial explorations were devoted to
recreating, through computer techniques, emblematic works by Mondrian,
by Op-art (Bridget Riley) and by motion sculpture, among other trends that
were fashionable at the time.
In other words, a sort of perfect accord was reached
between the still very rudimentary aptitudes of computers for processing
images and sound in binary mode - data whose use requires huge amounts
of memory capacity and complex instructions - and a type of analytical
reductionism towards which new avant-garde art tended: from an abstract
geometrical premise whose greatest antecedents can be found in Cubism and
Constructivism, through the new approaches oriented towards concept and
procedure, to language codes and sign systems, and the "ABCs"
of things like Minimalism and monomorphic structures (as per George Maciunas).
The language managed by the initial graphic and animation systems and
programs was therefore a relatively simple one of abscissas and co-ordinates,
dots and lines, curves and sinusoids, planes and isometrics, mosaics and
symmetries, arabesques and geometrical forms, trajectories and displacements,
permutations and stochastic factors, measurements and insertions, limited
grey-scales or colours.... These were the Sketchpad (Ivan Sutherland,
1962), BEFLIX and EXPLOR (Kenneth C. Knowlton, 1963 and 1970), Model 5.3
(John Stehura, 1965), GRAF (John P. Citron, 1966), GRASS (Thomas DeFanti,
1971), MSGEN (Nestor Burtnyk and Marceli Wein, 1971), among other pioneering
contributions by individuals, groups and research laboratories, universities
and technology institutes, and of course large corporations and emerging
industries. All of this was geographically and institutionally concentrated
- as a new technology which reached its peak and greatest degree of development
- in the most opulent nations.
From a technological perspective, failing to make
any further references about the computer's evolution and its first manifestations
during this period would be like getting stuck in prehistory. Thirty years
on, our culture is thoroughly impregnated by the subsequent advances
of digital machinery and by the sequelae of other early explorations,
which are equally remote in time, or possibly even more so.
Take, for example, the hypertext systems for accessing related data in
a non-linear way (from the Memex described by Vannevar Bush in 1945, to the
Xanadu project by Theodor Nelson, in 1965), or the virtual reality and
sensorial telepresence techniques (from the Sensorama Simulator patented
by Morton Heilig, in 1962, to the first "immersion helmet" tested
by Ivan Sutherland, in 1966). As a result, art and aesthetic theory today
have to confront other challenges, horizons and dilemmas which, in part,
cannot be inscribed in any one particular discipline.
In any event, here we are trying to consider the first point of confluence
between film-makers and computing machines - cinema and the computer being,
in some ways, the two great poles of reference in 20th century cultural history.
At the beginning and end of that same century, both poles have encouraged
the idea that art, science and technology ought to converge, since they
have several features in common: a meticulous (pre)history, an impact
on society, and a variety of cultural uses and offshoots. And, although
it may apparently seem as yet a limited episode of just over two decades
(60s/70s), with relatively few authors and works displaying aesthetic
consistency, the Calculated Cinema series intends to open a window of
observation allowing a glance both backwards in time, as well as forwards.
The subject will be examined through context-specific frameworks - avoiding
any type of linear sequencing - whose point of departure can be found
mainly within the traditional areas of visual music and non-objective
art. Eventually, the idea is to offer an antidote for the digital animation
styles which are now hegemonic and omnipresent.
Every artist definitely has some type of dialogue
with his tools and his means of expression; whether brushes, paint and
paper, a video synthesiser, or a computer. My tools are mathematics and
programming, and the computer is the medium I use. In this sense the computer
adds a new dimension to this field of exploration started by Ginna and
Corra in 1912, the Italian Futurists to whom the first abstract films
are attributed. They spoke of 20th century dynamism. Today we talk about
mathematics.
LARRY CUBA, 1986
The concept of the calculated image has been used on occasion as a reference
to the series of techniques in digital image generation, animation and
processing, and in particular those "constructed" by means of
arithmetical calculations and computer tools (starting from scratch, so
to speak, from the combination of zeros and ones). To speak of calculated
cinema is not, therefore, a mere stylistic flourish. Instead, the aim is to
venture a concept that goes beyond the typical taxonomies used in the
history of film-making theory, as a means of introducing other perspectives.
On the one hand, it refers us back to the beginning, to the pioneers of
images created and animated by the computer, and, in a wider sense, to
other early explorations in the principles of automation or machine-assisted
creation. These explorations mainly - though not exclusively - took place
in the domains of non-objective animation. In other words, they were abstract
images. Others, however, preferred to call them absolute or concrete images.
However, we can also go back to the beginnings of metric montage - also
called arithmetical - theorised by Eisenstein and other Soviet film-makers,
in which the frame was used as a unit of calculation. In some ways, these methods
transfer musical concepts of rhythm, tempo, harmony, measurement, interval,
tone, etc. These ideas and syncretisms have fascinated numerous visual
and audio-visual artists, and were particularly "hot" in the
first third of the century alongside a certain exaltation of science and
technology. These methods were subsequently re-used in the theory and
practice of metric cinema by the Viennese Peter Kubelka - "the Webern
of film", in Stan Brakhage's words - and later by other moving
picture experimentalists.
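The arithmetic of metric montage - the frame as unit of calculation - can be made concrete with a hypothetical example (the figures below are illustrative, not taken from any particular film): at sound speed of 24 frames per second, a musical tempo translates directly into shot lengths counted in frames.

```python
def frames_per_beat(fps: float, bpm: float) -> float:
    """Shot arithmetic with the frame as the unit of calculation."""
    return fps * 60.0 / bpm

# At 24 fps and a tempo of 120 beats per minute, one beat is 12 frames,
# so a four-beat shot runs exactly 48 frames (two seconds of screen time).
beat = frames_per_beat(24, 120)
print(beat, 4 * beat)  # 12.0 48.0
```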
An artist, pedagogue and expert in these fields, Malcolm Le Grice - author
of a major work of reference: Abstract Film and Beyond (1977) - pointed
out, reiterated and asserted the links that can be established between
the underlying proposals of abstract and experimental
film in general, and today's practices and explorations with the new digital
media. Thus, the non-linearity of a type of film described as non-narrative
or anti-narrative foresaw more than one aspect of the properties
inherent to technologies based on the principle of direct or random access.
The premonition of interactivity could be found, on the other hand, in
the rejection of passivity with regard to the work's reception and closure. And,
in short, this anti-standardised, searching spirit explodes in the idea
of expanded cinema, to take an expression employed by several film-makers
and theoreticians, and used in the title of a book by Gene Youngblood,
Expanded Cinema (1970), one of the first to discuss the new technologies,
audio-visual practices and multimedia.
This programme series, therefore, revolves around the initial explorations
of computer machinery (and programming), and the recycling and DIY use
of electronic components in generating images. And above all, it attempts
to establish the relationship between diverse works and aesthetics (and
techniques), which are occasionally very distant when reduced to a chronological
criterion. These links go beyond a recurring presence that can be found
in some ethnic and ritual music, in a graphic universe full of mandalas
and arabesques or in a certain reductionist essentialism. It is not by
chance that many of these authors acknowledge a debt to, or declare themselves
followers of, the initial avant-garde explorations of aesthetic, syncretic
and technical aspects such as absolute abstraction, the concept of visual
music, the syntax of montage, real-time animation and technological invention.
This is the primary reason for including some of the following titles,
interspersed throughout the series: works by Oskar Fischinger, László
Moholy-Nagy, Kurt Schwerdtfeger, Alexandre Alexeieff / Claire Parker and
Norman McLaren which, beyond their intercalation as diachronic references
or techno-aesthetic counterpoints, serve as a reference point for several
constants that can be found not only in the output of this particular
generation, but also in some of the most recent works. Thus, the programme
insinuates that neither technology nor the machine language of information
is at the true heart of the matter... even though the growing intermarriage,
and even competition, between digital technologies and art (and, of course,
the film industry) gives good cause for raising several subjects that
are all too often overlooked, even though their impact can be seen and
appreciated in "cult" works.
The accelerated pace at which the computer's graphic potential is evolving,
whereby digital animation systems are growing ever more sophisticated
and several hegemonic styles are being adopted - between hyper-realism
and 3-D cartoons - means that the series is doomed to adopt a virtually
archaeological approach, although in no way does this render irrelevant
the languages and universes these works seek to express; precisely because technology
is just another element - though not merely accidental - among the considerations
that have guided artistic research into the electronic and digital media.
Just like in the field of music, where there is renewed interest and curiosity
in old-fashioned electro-acoustic procedures, and instruments like the
Theremin and other unique artefacts, many of these films reveal the extent
to which technological imagination often ran ahead of major brands and industrial
launches.
Initial intuition about the potential for machine-generated graphics and
animation dates back to the period between the 40s and 60s, by way of
the cathode ray tube. Artists and film-makers like Karl Otto Götz,
Ben F. Laposky, Norman McLaren, Hy Hirsh, Mary Ellen Bute, Jordan Belson,
Nam June Paik and Alexandre Vitkine used radar screens, oscilloscopes
and television sets to electrically model abstract forms, relatively simple
motion, geometrical grids, and Lissajous curves (named as such in honour
of the 19th century French scientist, Jules A. Lissajous). These people
were precursors of subsequent explorations in video synthesis and digital
creation.
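The Lissajous curves these artists drew from oscilloscope screens arise from two perpendicular sinusoidal oscillations; a minimal sketch of the underlying mathematics (the parameter names are my own):

```python
import math

def lissajous(a: float, b: float, delta: float, n: int = 1000):
    """Sample n points of the curve x = sin(a*t + delta), y = sin(b*t),
    the figure traced when two sine waves drive the beam at right angles."""
    points = []
    for i in range(n):
        t = 2 * math.pi * i / n
        points.append((math.sin(a * t + delta), math.sin(b * t)))
    return points

# A 3:2 frequency ratio with a quarter-period phase shift gives one of
# the classic intertwined figures seen on oscilloscope screens.
curve = lissajous(3, 2, math.pi / 2)
```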
Baptised by some of these artists with terms like "electronic painting"
(Götz), "electronic oscillations and abstractions" (Laposky),
"abstronica" (Bute) or "chromophony" (Vitkine), these
experiments must be understood in a context of expansive techno-aesthetic
research, which, in a wider sense, concerned multiple and stereoscopic
projections, optical printing techniques, sound synthesis, musical and
visual creation by mechanical and electronic means, and often a sort of
technological, empirical DIY, as a means to push beyond the conventional
means available.
This backdrop is also valid for the principal pioneers of computer animation
in the United States, among whom we could cite Charles Csuri, Stan VanDerBeek,
Lillian Schwartz, John Stehura, Jules Engel, Ed Emshwiller and, above
all, what we could call the "Whitney dynasty", formed by brothers
James and John Whitney, the latter's wife - artist Jacqueline Helen Blum
- the couple's three children - John Jr. (founder of companies like Digital
Productions and Optomystic), Michael, and Mark - and a number of disciples
and collaborators including Larry Cuba and Gary Demos; a dynasty that
is divided between faithfulness to non-objective animations and visual
music, and a good nose for business in commercial computer graphics.
Towards the end of the 50s, John Whitney built himself an animation machine
from recycled military surplus components. It was his own mechanical analogue
apparatus, and though still at a DIY stage, it was precise and full of
possibilities, as can be seen in his film Catalog (1961) - conceived as
a sort of "demo reel" - and even more so in his brother James' extraordinary
piece, Lapis (1963-66). The series places greater emphasis on
John Whitney's contribution, including all his distributed work: from
the abstract film and synthetic sound experiments he did with his brother
in the 40s, through to what would be his last film, Arabesque (1985).
Until his death, in 1995, John Whitney devoted his time to creating an
interactive system for real-time audio-visual composition.
The films created by Larry Cuba, nicknamed "the Bach of abstract
animation" by Gene Youngblood, are no less brilliant for their scarcity.
Recently, he has promoted The iota Center, an organisation devoted to
the "art of light and movement". This wider concept is also
reflected in the series through intercalated references to other
techniques and machines that have served to bridge the gap between luminous
and electronic art, glimpsed through the work of Moholy-Nagy, Fischinger,
Thomas Wilfred and Nicolas Schöffer, among others - along with Farblichtspiele
by Schwerdtfeger or the film by Hirsh entitled Gyromorphosis. The programming
also includes the latest contributions to experimental animation and absolute
cinema by Cuba and other film-makers like Paul Glabicki, Robert Darroll
or Bart Vegter, who, in recent years, have also incorporated the computer
into their working methods.
Finally, other approaches have been taken by Europeans like Marc Adrian,
Malcolm Le Grice and Pierre Rovère, in some of their first cybernetic
incursions. For example, the cinema of systems - structural, materialist,
or material-based - which was initiated towards the end of the 50s and
which, in a wider sense, allow the establishment of links with works by
other film-makers like Peter Kubelka, Kurt Kren, Paul Sharits, Taka Iimura,
Werner Nekes, Bill Brand, Christian Lebrat and Joost Rekveld, whose algebra
operates with the frame rather than the bit, and with the impression of
light rather than electronic particles. These cross references connect
back once again with others that were mentioned in earlier programmes
in the series, as a means to further enhance the subject matter with a
dual significance. On one hand, formulating, revealing or suggesting the
synchronic or diachronic relationships that give
context to aesthetic rather than technological explorations;
while at the same time offering variety and heterogeneity instead of
the repetitiveness of a selection whose sole criterion is that of an archaeology
of computer animation. The aim is to break with the trivialised perception
of the early cybernetic aesthetic - formalist, decorative or psychedelic
- still widely held today.