i-DAT Launches: OP-SY.com

i-DAT launches OP-SY.com – a central repository for its Operating Systems.
i-DAT is developing a range of ‘Operating Systems’ to dynamically manifest ‘data’ as experience in order to enhance perspectives on a complex world. The Operating Systems project explores data as an abstract and invisible material that generates a dynamic mirror image of our biological, ecological and social activities.


The Operating Systems project proposes a range of tools and initiatives that have the potential to enhance our ability to perceive and orchestrate this mirror world. The intention with the range of Operating Systems outlined on this site is to make the data generated by human and ecological activity tangible and readily available to the public, artists, engineers and scientists.

Update: OP-SY.com is now retired but can be found here:
https://i-dat.org/op-sy-2/

Digital Alchemy

Curtains for the Albertian Window ~ Digital Alchemy for the Scale Electric.

The presentation playfully explores the shifting frames of reference that occur as the result of our immersion in digitally augmented environments. With mobile phones, GPS devices, RFID tags, data feeds and video streams, our understanding of our place within the world has never been more complicated.
With the ‘thingification’ of the Internet, an invisible ‘Hertzian’ landscape has been made accessible through instruments that can measure, record and broadcast our deepest fears and desires. Seeping out of our computers, infesting our white goods, our cars shouting advice and our ornaments remembering things we would rather forget, the ‘virtual’ is converging with the physical and their mutant offspring are shaping our future.
The focus will be on a number of technologies and creative strategies developed by i-DAT and its collaborators. Collectively described as ‘Operating Systems’ these digital tools are designed to lift the veil on this invisible and temporal world.
Underpinning these Operating Systems is the understanding that the material for manifesting things that lie outside of the normal frame of reference is ‘data’ – things so far away, so close, so massive, and so small and so ad infinitum. These digital practices use alchemical processes that enable a series of transformations: from data to code to experience to behaviour.
Sometimes 3 dimensions are just not enough.

MARCEL·LÍ ANTÚNEZ ROCA. NEW TECHNOLOGIES AND PERFORMING ARTS, TRANSDISCIPLINARY APPROACHES.

11 October 2010 – 12 October 2010

Laboratorio multimediale “Guido Quazza” – Palazzo Nuovo, Via Sant’Ottavio 20, Torino; Accademia delle Belle Arti – Via dell’Accademia Albertina 6, Torino

Officine Sintetiche 2010 organises a study day and a workshop dedicated to the artist Marcel·lí Antúnez Roca (co-founder and former member of the avant-garde theatre group La Fura dels Baus), in order to analyse theoretical approaches and artistic practices influenced by the use of new information and communication technologies.

Media and new media transform artistic practice and generate new modes of expression through the interaction between technological material and artistic creativity. Technicians, artists, engineers and creators work in ever closer contact, establishing new procedures strongly shaped by the nature and particularities of communication technologies.

Marcel·lí Antúnez Roca is one of the most important international representatives of this new theatrical platform. He brings together on stage interactive video and sound, techno-body interfaces, sensors and networked devices, producing a performativity that dissolves the traditional boundaries between artistic disciplines. Theatre, cinema and contemporary art now confront one another on the new ground of technological creativity.

11 OCTOBER: STUDY DAY

Free entrance. Laboratorio multimediale “Guido Quazza” – Palazzo Nuovo, Via Sant’Ottavio 20, Torino.

9.30-11.00 Opening session with Marcel·lí Antúnez Roca. Keynote speakers: Emma Zanella, Director of the MAGA-Museo Arte Gallarate, and Francesca Consoni, MAGA-Museo Arte Gallarate.

11.00-13.00 Round-table discussion on “MULTIMEDIA DRAMATURGY”. Over the last two decades we have witnessed a rapid development of digital performance. Marcel·lí Antúnez is one of the pioneers of a new dramaturgy that designs the performance and brings together interactive art and live performance. In addition, he has developed a distinctive approach to narrative and drama through the lens of hypertext and multimedia. How does his practice challenge modern and postmodern performance? Do we have to reconsider the concept of stage dramaturgy in the light of interactive media? Is digital performance better seen as a continuation of avant-garde theatre, or does it represent an unbridgeable rift?

Participants:
Antonio Pizzo – Università degli Studi di Torino
Steve Dixon – Brunel University, London
Lorenzo Mango – Università degli Studi di Napoli – L’Orientale
Federica Mazzocchi – Università degli Studi di Torino

15.00-17.00 Round-table discussion on “TECHNOLOGICAL PERFORMATIVITY AND CREATIVITY”. Creativity today is deeply altered by technological practices. The structural and practical features of digital technologies transform the performativity involved in artistic creation, and this represents a radical break. What are the consequences of such a transformation? What are the new scenarios of technological performativity in the world of art?

Participants:
Tatiana Mazali – Università Telematica Internazionale Uninettuno di Roma
Gianni Corino – i-DAT / University of Plymouth
Mike Phillips – i-DAT / University of Plymouth
Alessandro Amaducci – Università degli Studi di Torino
Domenico Quaranta – Accademia di Belle Arti di Brera, Milano

12 OCTOBER: WORKSHOP

Accademia delle Belle Arti – Via dell’Accademia Albertina 6, Torino. Workshop conducted by the artist Marcel·lí Antúnez Roca and his technical team on the interactive systems used in the performance COTRONE, whose début will take place on 14 November at the Cavallerizza Reale, Maneggio di Torino.

Participation requests must be submitted to tatiana.mazali@polito.it before 7 October.

Scientific Committee and organisers:

Tatiana Mazali, Federica Mazzocchi, Antonio Pizzo.

Information:
www.officinesintetiche.it

tatiana.mazali@polito.it

federica.mazzocchi@unito.it

antonio.pizzo@unito.it

Roy Ascott @ INDAF LPDT2/Syncretica

LPDT2

i-DAT has contributed to LA PLISSURE DU TEXTE 2 (LPDT2) (Incheon International Digital Art Festival 2010 (INDAF), 01-30/09/2010, Tomorrow City, Songdo, Incheon, Korea), a twenty-first-century reimagination of Roy Ascott’s famous 1983 telematic work LA PLISSURE DU TEXTE. This Second Life version (built and enacted by Elif Ayiter, Max Moswitzer and Selavy Oh, in association with Heidi Dahlsveen) is installed at INDAF and incorporates an Artificial Intelligence which enables the public to enter into an SMS conversation with the LPDT2 metaverse.

http://www.indaf.org/e_sub02_02.asp

LPDT2

THE SECOND LIFE OF LA PLISSURE DU TEXTE

Roy Ascott 2010

LPDT2 is the sequel to Roy Ascott’s initial La Plissure du Texte, the generic telematic project about distributed authorship, and the pleasure and pleating of the text, created for the exhibition Electra at the Musée d’Art Moderne, Paris in 1983

<http://artelectronicmedia.com/artwork/la-plissure-du-texte>.

Now, three decades later, LPDT2 seeks a new level of artistic creativity and technological expertise, dealing with distributed authorship in the metaverse of Second Life, involving textual mobility and the fluidity of an emergent poesis. Just as the first LPDT saw artists explore the telematic technology of the early 1980s, LPDT2 involves leading artists and designers in Second Life, and their associates, in the conception and construction of worlds of non-linear text, transforming the metaverse into a purely textual domain. The field of operations is a horizontal screen: the table-top motif that runs throughout Ascott’s oeuvre.

Principal Co-Authors

Elif Ayiter aka Alpha Auer is a designer and researcher specializing in the development of hybrid educational methodologies between art & design and computer science, teaching full time at Sabanci University, Istanbul, Turkey. She has presented creative as well as research output at conferences including Siggraph, Consciousness Reframed, Creativity and Cognition, ISEA, ICALT, Computational Aesthetics (Eurographics) and Cyberworlds. She is also the chief editor of the forthcoming journal Metaverse Creativity with Intellect Journals, UK and is currently studying for a doctoral degree at the Planetary Collegium, CAiiA hub, at the University of Plymouth with Roy Ascott. http://syncretia.wordpress.com/ http://alphatribe.tumblr.com/ http://www.citrinitas.com/

Max Moswitzer, born 1968, lives and works in Vienna and Zurich. Moswitzer’s output is in Fine Art and the construction of playful situations, using dérive and détournement as methodologies for the transformation and reverse engineering of networked computer games and art systems. Since 1996 he has provided his own server <http://www.konsum.net> and is a founding member of www.ludic-society.net <http://www.ludic-society.net>. In 2007 Moswitzer moved some of his creative practice into the metaverse, i.e. Second Life. His architectural installation ‘Whitenoise’ was one of four winners of the first Annual Architecture & Design Competition in Second Life, an internationally juried event of Ars Electronica 2007. He recently completed ‘Ouvroir’, a virtual museum in Second Life for Chris Marker commissioned by the Museum für Gestaltung, Zürich.

Selavy Oh was created in 2007 as an avatar in Second Life, where she works using the virtual world as her medium. She has presented her work in solo exhibitions within Second Life, e.g. at the IBM exhibition space, Arthole Gallery, and Odyssey. Her work was selected for the Final 5 exhibition of the mixed-media project “Brooklyn Is Watching” at the Brooklyn art gallery Jack The Pelican Presents, and has been covered by web publications such as SmartHistory and art:21. Selavy’s creator works as a neuroscientist at the University of Munich, investigating topics ranging from spatial perception through computational neuroscience to human-robot interaction.

Associates

i-DAT is a networked entity catalyzing Art, Science and Technology research [www.i-dat.org]. Chris Saunders is a Research Assistant at i-DAT and a digital media developer for organisations as diverse as Deutsche Bank and Creative Partnerships. Mike Phillips is the director of i-DAT and Professor of Interdisciplinary Arts at the University of Plymouth. His private- and public-sector grant-funded R&D orbits digital architectures and transmedia publishing, with particular application to ‘Full Dome’ immersive environments and data visualization. i-DAT’s LPDT2 SMS augmentation enables visitors to the LPDT2 installation to SMS the system through an Artificial Intelligence (AI) that feeds the Second Life environment. The LPDT2 AI learns, interprets and evolves through its mediation between the installation and visitors.
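
As a purely illustrative sketch of that kind of pipeline (none of these function names come from the LPDT2 code; the SMS gateway, AI and metaverse hooks are hypothetical placeholders), the relay loop might be structured like this:

# Illustrative relay loop for an SMS -> AI -> metaverse pipeline.
# Every function here is a hypothetical placeholder, not the LPDT2 implementation.
import time

def fetch_new_sms():
    # Placeholder: poll an SMS gateway and return a list of incoming texts.
    return []

def ai_respond(text):
    # Placeholder: pass the visitor's text through the conversational AI.
    return "echo: " + text

def post_to_metaverse(text):
    # Placeholder: push the generated text into the Second Life installation.
    print("to installation:", text)

while True:
    for message in fetch_new_sms():
        post_to_metaverse(ai_respond(message))
    time.sleep(10)  # poll the gateway every ten seconds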

Heidi Dahlsveen aka Frigg Ragu is a storyteller and assistant professor at Oslo University College, touring in Scandinavia as well as internationally, performing stories for children and adults. In 2009 she published her first storytelling book. Her main occupation and interest in the virtual world is the performing arts and how to tell stories through poses and animations. In 2009/2010 Dahlsveen was given a grant from the Norwegian Arts Council to research and compare performing arts in virtual worlds with those in real life. <http://www.dahlsveen.no>

Scale Electric… 19 & 20/07/2010

[Scale Electric PDF]

Introduction…

The Scale Electric workshop (19 & 20/07/2010) couples the power of the Atomic Force Microscope to touch the infinitesimally small with the potential of the Full Dome environment to immerse participants in visualisations of the incomprehensibly big.

Throughout the last century we were reintroduced to the idea of an invisible world. The development of sensing technologies allowed us to sense things in the world that we were unaware of (or maybe things we had just forgotten about?). The Scale Electric – the invisible ‘hertzian’ landscape – was made accessible through instruments that could measure, record and broadcast our fears and desires. These instruments endow us with powers that in previous centuries would have been deemed ‘occult’ or ‘magic’.

Our Twenty First Century magic instruments mark a dramatic shift from the hegemony of the eye to a reliance on technologies that do our seeing for us – things so big, small or invisible that it takes a leap of faith to believe they are really there. Our view of the ‘real world’ is increasingly understood through images made of data, things that are measured and felt rather than seen. What we know and what we see is not the same thing – if you see what I mean?

Our ability to shift scales, from the smallest thing to the largest, has been described as the ‘transcalar imaginary’. The workshop will enable participants to touch the nano level and then immerse themselves within it through visualisations and sonifications.

Context:

Scale Electric extends a series of collaborative projects orbiting i-DAT’s research agenda. It builds on:

practical workshops to explore the application of novel and innovative technologies to creative practice (http://www.i-dat.org/2006-slidingscale/, http://www.i-dat.org/far-away-so-close/, http://www.i-dat.org/ahobartletti-dat/, etc)

projects with the Immersive Vision Theatre (a 40-seat, 9m Full Dome digital projection system), a transdisciplinary instrument for the manifestation of material, immaterial and imaginary worlds – modelling, visualization, sonification and simulation.

research projects such as Arch-OS and Ecoids, which stream real-time data to facilitate insights into complex temporal architectural and ecological systems (http://www.arch-os.com/)

and, more recently, nanotechnology projects in collaboration with the Wolfson Nanotechnology Laboratory and the John Curtin Gallery, Perth, WA – Art in the Age of Nanotechnology, 5/02 – 30/04/2010 (http://johncurtingallery.curtin.edu.au/)

Output generated by this workshop will contribute to the Ubiquity Journal, published in 2011 by Intellect (http://ubiquityjournal.net/, http://www.intellectbooks.co.uk/journals/index/).

Scale Electric explores some of the ‘transcalar’ (http://www.elumenati.com/products/TInarrative.html) conundrums that are increasingly intruding into our daily consciousness.

Schedule…

Monday 19/07/2010

10.00-10.15: Introductions, Briefing: Location – Babbage 213

10.15-10.30: Presentation 1: Prof Mike Phillips.

10.35-10.50: Presentation 2: Dr Chris Speed.

10.55-11.10: Presentation 3: Prof Genhua Pan.

11.15-11.30: Presentation 4: Pete Carss.

12.00-12.30: Tour of the AFM & IVT

12.30-13.30: Lunch

13.30-14.30: Production Planning: Location – Babbage 213

14.30-17.30: AFM Scanning: Location – The Wolfson Nanotechnology Laboratory

Tuesday 20/07/2010

10.00-10.30: Briefing: Location Babbage 213

10.30-12.30: Project development AFM & IVT

12.30-13.30: Lunch

13.30-15.30: Project development AFM & IVT

15.30-17.30: IVT Manifestations

Process…

A: Experiencing Atoms:

The first practical session will utilise the AFM in the Wolfson Nanotechnology Laboratory to produce data and images. The materials themselves will be defined during the morning session. Participants will be asked to propose matter and associated narratives for examination.

B: Modelling Experience

Software templates will allow the interpretation and visualisation of the data gathered by the AFM. These visualisations will be hacked, tweaked and ultimately experienced within the Immersive Vision Theatre.
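
As a minimal sketch of this step (assuming, hypothetically, that an AFM scan has been exported as a plain-text grid of height values in a file called afm_scan.txt; this is not the workshop’s actual template), the data can be rendered as a simple heightmap in Python:

# Minimal sketch: render an exported AFM height grid as an image.
# 'afm_scan.txt' is a hypothetical export: one scan line per row of
# whitespace-separated height values (units assumed to be nanometres).
import numpy as np
import matplotlib.pyplot as plt

heights = np.loadtxt("afm_scan.txt")              # 2D array: scan lines x samples
plt.imshow(heights, cmap="viridis", origin="lower")
plt.colorbar(label="height (nm)")
plt.title("AFM scan as heightmap")
plt.savefig("afm_heightmap.png", dpi=150)

From here the same grid could be hacked and tweaked in dome-friendly tools (Fluxus, Processing, etc.) for projection in the Immersive Vision Theatre.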

Project Team…

Pete Carss (http://www.i-dat.org/pete-carrs/)

Prof Genhua Pan (http://www.plymouth.ac.uk/staff/gpan)

Prof Mike Phillips (http://www.i-dat.org/mike-phillips/)

Dr Chris Speed (http://fields.eca.ac.uk/?page_id=65)

Supported by…

The Institute of Digital Art & Technology: [http://www.i-dat.org/]

Manifest Research Group

The Wolfson Nanotechnology Laboratory

The Centre for Media Art & Design Research

Ubiquity Journal


Le corps jouable, Marc Fournel, vidéos et installations.

DAGAFO and [Séquence] are pleased to announce the launch of The Playable Body, Marc Fournel, videos and installations at OBORO on March 26, 2010 from 5 to 7 pm, 2001 rue Berri, Montréal.
The Playable Body, Marc Fournel, videos and installations is a publication about the artistic work of Marc Fournel from 1995 to 2009 that comprises texts in English and French by authors Fabrice Montal, Caroline Seck Langill, Jean Gagnon, and Mike Phillips. The monograph also includes a short text by the artist about the enigmatic titles of his works. The hardcover book is illustrated with color photos.
The gallery [Séquence] in Chicoutimi and DAGAFO have collaborated on this publication. Volumes, essays, and critical articles on the works of contemporary Quebec artists whose practice is based on new media are rare. This edition introduces a singular figure in the landscape of media art, while at the same time contributing to the necessary documentation of these artistic practices.
After a number of videographic productions between 1995 and 1998, Marc Fournel began to explore more systematically the application of computing and software, particularly open-source software. Since 2004 he has devoted himself full-time to his artistic research, which has allowed him to produce a number of works and also to develop his own tools and instruments. His artistic work has been presented nationally and internationally, notably at OBORO in Montreal, [Séquence] in Chicoutimi, InterAccess in Toronto, the International Festival of Video Art of Casablanca, IRCAM in Paris, and the Fundación Telefónica in Buenos Aires.
DAGAFO is a non-profit organization founded in Montreal in 2007 by Ricardo Dal Farra, Jean Gagnon, and Marc Fournel. DAGAFO supports, develops, and produces projects, by means of exhibitions and publications, that foster cultural exchange and relations on a national or international level. DAGAFO thanks [Séquence], Chicoutimi (QC) and i-DAT, Faculty of Technology, University of Plymouth (UK) for their collaboration in this publication, as well as the Canada Council for the Arts for its financial support.
[Séquence] is an important regional center in Saguenay for the production, presentation and development of media art and new media in Quebec. The center’s involvement in the development of international relations has allowed it to put in place a strong network for the exchange, presentation, and production of media art works. [Séquence] would like to thank DAGAFO, the Conseil des arts et des lettres du Québec, the Conseil des Arts de Saguenay, and the city of Saguenay.
Available after book launch through the website of RCAAQ: http://rcaaq.org/librairie/

Aggregator v1.0 – 27/02/2010

Aggregator v1.0 builds on a suite of creative ‘tools’ or ‘operating systems’ that dynamically manifest ‘data’ as an abstract and invisible material, forming a mirror image of our world and reflecting, in sharp contrast and high resolution, our biological, ecological and social activities.
Aggregator v1.0 generates an audio/visual immersive experience of data feeds from web 2.0 platforms, news feeds, networks, buildings, and satellites all orchestrated through subtle audience interaction.
Aggregator v1.0 is an evolving generative performance and the audience is able to drop in and out during the session.
Aggregator v1.0 coding and composition by Pete Carss.
Aggregators: Pete Carss and Mike Phillips.
Aggregator v1.0 is a component of the Peninsula Arts Contemporary Music Festival 2010
Date: Saturday 27 February.
Venue: Immersive Vision Theatre.
Time: 12:00pm – 4:00pm.
Admission: FREE.
[Images: Pete - Live Code, intro, code surface]
http://www.youtube.com/watch?v=RDyk9rj6QGo
Live coding application – Fluxus (care of Dave Griffiths):
http://www.pawfal.org/fluxus/
Audio feeds:
PALAOA Audio Observatory (microphone under ice)
http://icecast.awi.de:8000/PALAOA.MP3
Air Traffic Control:
http://mso.liveatc.net:80/khnd1
http://aus.liveatc.net:80/sbbr_acc
Calm noises:
http://www.whitenoise247.com/Sounds/CalmSeaWaves.wav
http://www.whitenoise247.com/Sounds/river_full.wav
Natural Radio:
http://mp3.nasa-us.speedera.net:8000/mp3.nasa-us/florida1
http://67.207.143.181:80/vlf1
http://67.207.143.181:80/vlf3
http://67.207.143.181:80/vlf9
http://67.207.143.181:80/vlf15
http://194.116.73.37:8000/pontese124.m3u
http://icecast.nis.nasa.gov:8000/florida1
http://picasso.astro.ufl.edu:8000/icy_1
Radio Astronomy:
http://28.72.128.252:8000/radast
Fluxus sample code:
;(require fluxus-016/drflux)
(require fluxus-017/planetarium)
;(set-dome-mode! #t)
(smoothing-bias 2)
(clear)
;(clear-colour 0)
;(blur 0.1)
;(fog (vector 0.1 0.1 0.1) 0.2 0.01 0.1)
(ortho)
(define dome (dome-build 10 180 2048))
; buffer size and sample rate need to match jack's settings
(start-audio "MPlayer" 1024 48000)
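; render recursively draws `count' tori, with transforms and colours
; driven by audio analysis bands ((gh n) returns the level of band n)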
(define (render count)
(cond
((not (zero? count))
(translate (vector 0.1 0.1 (* 10 (gh 4))))
(scale (vector 2 2 1))
(rotate (vector (gh 4) (gh 5) (gh 6) ))
(colour (vector (* 0.5 (gh 4)) 0.2 (* 0.5 (gh 10))
0.3))
(opacity 0.3)
(draw-torus)
(render (- count 1)))))
;(with-state
;(rotate (vector 0 -25 0))
;(render (- count 1))
;(draw-cube)
;set the view of the camera
(dome-setup-main-camera 1400 1050)
(every-frame
(with-pixels-renderer (dome-pixels)
(with-state
;(rotate (vector 0 0  (* 90  (cos(/ (time) 10)))))
(translate (vector 0 0 -100)) ; move it into view
(render 10))))

TOPLAP UK presents…

TOPLAP UK presents: Livecoding at Plymouth University’s Immersive Vision Theatre

20th September 2009 6pm

What is livecoding? Recently featured on the BBC website (http://news.bbc.co.uk/1/hi/technology/8221235.stm), livecoding is a virtuosic computer music and video performance technique which involves the creation of a sound and/or video generating computer program in real time in front of an audience. The performers’ screens are projected so that the audience can see the creation of the program as well as seeing and hearing its output.

Who is performing? In this performance, slub (Alex Mclean and Dave Griffiths) and Wrongheaded (Matthew Yee-King and Click Nilson) will perform in the state-of-the-art Immersive Vision Theatre at Plymouth University, which is described as a ‘transdisciplinary instrument for the manifestation of material, immaterial and imaginary worlds’ (http://www.i-dat.org/toolbox). It features full dome projection and a sophisticated spatial sound system. This is the first time that livecoding has been performed using such a system – a world first in Plymouth! As such, this performance represents a prototype for a planned tour of UK planetaria. Expect a variety of high-tech electronic sound and visuals as well as some algorithmic choreography.

Contact: Pete Carss peter.carss@plymouth.ac.uk

Further info: TOPLAP UK (the loose organisation behind the gig) http://toplap.org/uk/

Alex Mclean (slub): http://slub.org/ http://yaxu.org/

Dave Griffiths (slub): http://www.pawfal.org/dave/

Matthew Yee-King (wrongheaded): http://www.yeeking.net

Click Nilson (wrongheaded): http://www.irefusetobeontheweb.com

The Immersive Vision Theatre: http://www.plymouth.ac.uk//pages/view.asp?page=18227


Eco-OS Workshop – Ecoid Prototype.

Ecoid workshop and prototype development with site testing in Nagoya, Japan.

A component of Eco-OS:

i-DAT is developing a range of ‘Operating Systems’ which dynamically manifest ‘data’ as experience and extend human perception. Arch-OS [www.arch-os.com], an ‘Operating System’ for contemporary architecture (‘software for buildings’), was the first i-DAT ‘OS’, developed to manifest the life of a building; it is currently being installed as the i-500 (www.i-500.org) in Perth, Western Australia.

Eco-OS explores ecologies. Eco-OS further develops the sensor model embedded in the Arch-OS system through the manufacture and distribution of networked environmental sensor devices. Intended as an enhancement of the Arch-OS system, Eco-OS provides a new networked architecture for internal and external environments. Networked, location-aware data gathered from within an environment can be transmitted within the system or to the Eco-OS server for processing.

[Images: breadboard; Processing screen; Ecoid workshop; ecoid v1; configuration in Japan; Nagoya; Ecoid on site, Japan]

Eco-OS collects data from an environment through the network of ecoids and provides the public, artists, engineers and scientists with a real time model of the environment. Eco-OS provides a range of networked environmental sensors (ecoids) for rural, urban, work and domestic environments. They extend the concept developed through the Arch-OS and i-500 projects by implementing specific sensors that transmit data to the Operating Systems Core Database. Eco-OS also enables the transmission of data back to the Eco-OS ecoids to support interaction with the environment (such as light shows and the transmission of audio/music in response to the network activity).

Eco-OS Core Database: is an extension of the established Arch-OS Core database. The Eco-OS Core collects the data transmitted to it by the ecoids. The data is parsed and published through a range of flexible tools (Flash, Max/MSP, Processing, Java, etc), feeds (XML, RSS) and web 2.0 streams, such as Twitter and Facebook, which allow artists, engineers and scientists to develop visualisations, sonifications (music) and interactive projects. Eco-OS can operate in passive mode, simply collecting data from the environment, or in interactive mode, feeding back recursively through the environment.
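
By way of illustration only (the endpoint URL and element names below are hypothetical placeholders, not the published Eco-OS feed schema), a minimal client for such an XML feed might look like this:

# Minimal sketch of a client for an Eco-OS style XML feed.
# The URL and element/attribute names are assumed placeholders.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.org/eco-os/feed.xml"   # placeholder endpoint

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# Assume each <reading> element carries an ecoid id, sensor type and value.
for reading in tree.getroot().iter("reading"):
    print(reading.get("ecoid"), reading.get("sensor"), reading.get("value"))

The same parsed values could equally be pushed into Max/MSP, Processing or a web 2.0 stream, as described above.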

Ecoids: are sensor devices (small pods) that can be distributed through an environment (work place, domestic, urban or rural). The sensors allow environmental data to be collected from the immediate vicinity. The sensors can be connected together to form Wireless Sensor Networks (WSNs) that enable the coverage of an extensive territory (several kilometres). Each ecoid has a unique id, and its location within a network can be triangulated to give its exact position. Consequently, locative content can be tailored to a specific geographical area.
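
A minimal sketch of the kind of position estimate described here uses simple 2D trilateration from three anchor ecoids at known coordinates (the coordinates and ranges are made-up illustrative values; the actual Eco-OS localisation method is not detailed here):

# 2D trilateration sketch: estimate an ecoid's position from its
# measured distances to three anchors at known coordinates.
import numpy as np

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # metres (assumed)
ranges = np.array([70.7, 70.7, 70.7])                         # measured distances (assumed)

# Subtracting the first range equation from the others gives a linear system.
A = 2.0 * (anchors[1:] - anchors[0])
b = (ranges[0] ** 2 - ranges[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
position = np.linalg.solve(A, b)
print("estimated position:", position)   # approximately (50, 50) for these values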

Ecoids consist of programmable (Processing, Java, etc.) embedded technologies (Arduino, etc.) and network technologies (ZigBee/XBee, GPRS and Bluetooth). They are designed to be attached to objects (architecture, trees, rocks, etc.), deployed free-form (water-based, balloons, free-standing) or used as mobile sensors, and can be mains-powered or draw power from the environment (solar).
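
As an indication of how readings from such a device might be collected on the host side (the port name and the comma-separated line format are assumptions for illustration, not the ecoid firmware’s actual protocol), a small reader for an XBee serial link could look like:

# Sketch of a host-side reader for ecoid readings arriving over an
# XBee serial link. Assumed line format: "ECOID01,temperature,21.4"
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # placeholder serial port
BAUD = 9600             # assumed baud rate

with serial.Serial(PORT, BAUD, timeout=5) as link:
    while True:
        line = link.readline().decode("ascii", errors="ignore").strip()
        parts = line.split(",")
        if len(parts) != 3:
            continue  # skip empty or malformed lines
        ecoid_id, sensor, value = parts
        print(ecoid_id, sensor, value)   # or forward to the Eco-OS Core Database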

Ecoids can also be used to produce content by receiving instructions from Eco-OS. Distributed performances can then be orchestrated across a large territory through light displays or acoustic renditions.

The Operating Systems project explores data as an abstract and invisible material. Our potential to perceive our reality through data marks an evolution in human consciousness: the evolution of human perception through the emergence of senses more finely attuned to data!

Data generates a dynamic mirror image of our world, reflecting, in sharp contrast and high resolution, our biological, ecological and social activities. Reluctantly, we are becoming aware of the data shadows that cloud the periphery of our existence, as if through a glass darkly. The reluctance is, to some extent, the result of the fear we feel when we catch a glimpse of this data/mirror world out of the corner of our eye. Somewhere there is an attic, and in that attic stands a large ugly data portrait of our world. Reified, its metaphorical and haptic potential becomes a powerful tool for transformation. Operating Systems proposes a range of tools and initiatives that have the potential to enhance our ability to perceive and orchestrate this mirror world.

Eco-OS project and Ecoid development with B Aga, Gianni Corino, Luis Girao, Lee Nutbean, Mike Phillips, Chris Saunders and Chris Speed in Japan.

Dr Wolfgang Fiel

Born in 1973 in Alberschwende, Austria, he studied Architecture at the Vienna University of Technology (MSc) and obtained his Master in Architectural Design at the Bartlett, University College London, under the direction of Peter Cook. He is co-founder of tat ort, a Vienna-based practice for collaborative work and research on spatial appropriation, collective knowledge and interactivity. He is also a co-founder (2004) and the Artistic Director of the iCP, the Institute for Cultural Policy, located in Hamburg. Conceived as an open platform for prolific exchange between architecture, art, science and industry, the iCP organises exhibitions and lectures and edits a book series on experimental tendencies in contemporary architecture. In 2006 it was invited to participate in the VEMA Web Event as part of the New Italian Pavilions presentation at the 10th International Venice Biennale for Architecture. His individual and collective work has been exhibited and published widely. He is currently a design tutor at the Institute of Art and Design and has previously lectured at the Institute of Design and Building Construction, both at the Vienna University of Technology.
PhD Title: Dissipative Urbanism: From Democracy towards a Constitution of Time.
Given the rapid growth or sheer scale of urban agglomerations all over the world and the repercussions of globalized economies, politics and communication networks for the ‘lived experience’ of daily urban life, the field of urbanism is in dire need of a ‘unitary theory’ that would take account of the most basic issues beyond the boundaries of any discipline in particular, namely the human condition. From there we can start to delve into the diverse realities of individuals, their gathering in groups, their dialogue amongst each other and with their environment in its totality, and the complex interrelations within a highly dynamic network of associations, in order to arrive at the question of whether the emergence of a fully emancipated many – as opposed to the One of the state – requires more than the flawed promise of representational democracy to act for the ‘common good,’ or ‘general will’ (Rousseau, 2009 [1762]), of all.
This task, however, is ambitious, for we have to bridge the gap between the needs, aspirations, emotions, anxieties and dreams of individuals on the one hand, and the temporal emergence of collective co-operation on the other. Furthermore, ‘official’ knowledge, incorporated in endless columns of statistical data, gathered and administered meticulously thanks to the firm grip of institutionalised observation, is of little help, for we realise on a daily basis that the representations thereof are a poor match for the complexity of networked realities ‘on the ground’. At this point we realise that our task is not to provide alternative representations based on presumed universal identity, but to retain the full-blown heterogeneity of the multitude in order to allow the general intellect to thrive on the activity of the speaker. To speak is to act, and to act is the predominant trait of political praxis. It is through our acts and deeds that we disclose ourselves in public in the presence of others. And it is through acting that we start anew and leave our mark in a situation the moment we intervene in the circulation of empty signifiers upon which we assign a name, the name of an event. It is through our interventional participation that we allow for novelty to emerge in time, as a process without representation and sustained by fidelity. Dissipative urbanism is a statement about difference marked as intervention. This intervention requires the presence of others and the intention to act. It is the emergence of a ‘constitution of time’.
fiel@tat-ort.net