EMERGENCE : RE-COGNIZING COMPLEXITY

Mike Phillips, Bill Seaman and Dan Neafus present on the ix panel, chaired by David McConville and convened around Pierre Levy’s keynote:
http://ix.sat.qc.ca/node/359?language=en.
Saturday 23 May – 9:00
Phillips’ presentation, titled “Die Geister, die ich rief…”, explored the behavioural problems of collaborating with our new-found friend, Artificial Intelligence: how AI was probably something always lurking on the fuzzy edge between our desires and nightmares, and how we may just be conjuring it into existence now that a digital substrate has emerged to give it form.
(“The spirits that I called”). Der Zauberlehrling / The Sorcerer’s Apprentice, Goethe, 1797.

Negotiating Principles of Exhibition Design – The National Gallery of Ireland

Negotiating Principles of Exhibition Design:
i-DAT’s Paul Green (http://i-dat.org/paul-green/) is presenting “Negotiating Principles of Exhibition Design” at the National Gallery of Ireland Research Day 2015 on Friday 6 March 2015. CONDITIONS OF DISPLAY: RESEARCH & PRACTICE: The National Gallery of Ireland is holding its fifth postgraduate Research Day exploring new ideas, projects and research.
As the NGI approaches 2016 and the re-opening of its historic buildings, it faces the challenges of re-hanging and re-imagining the collection, all of which will be informed by the myriad conditions and concerns of display.
Venue: Gallery Lecture Theatre; Admission free, book in advance to education@ngi.ie
For information contact education@ngi.ie or 01 663 3509 or 663 3579
http://www.nationalgallery.ie/en/Learning/Adult_Programmes/Research_Day_2015.aspx

MEDIACITY 5: May 1-3 2015

MEDIA CITY 5 International Conference and Exhibition, 1-3 May 2015, Plymouth, UK
THE FIFTH MEDIACITY CONFERENCE REFLECTS ON A SOCIAL SMART CITY.
We invite contributions in the form of research papers, projects and case studies.

http://mediacity.i-dat.org/

The conference programme will focus on contributions that are high quality, reflective, thoughtful and challenging.
We anticipate contributions from academics, practitioners and activists working in or close to disciplines such as media studies, architecture, urban studies, cultural and urban geography, and sociology – contributions that use innovative methods and reflect critically on the processes, methods and impacts of public participation and technologies in the urban realm, within their theoretical and practical research, teaching or activism roles.


MEDIA CITY 5 is jointly organised by:
School of Architecture, Design & Environment  &  i-DAT (Institute of Digital Art and Technology)
Plymouth University
Plymouth, PL4 8AA, UK.

FulldomeUK 2014 winners

The winners of FulldomeUK 2014 have been announced.
FulldomeUK is a festival of fulldome art that we co-founded and that we help to run. This took place late last year, and the winners – the people at the cutting edge of the cutting edge of media – have been announced.
These films were selected by an independent jury from a shortlist curated by the festival’s organising committee. And the winners are (with a taste of the judges’ comments)…
Best in Show – Beat
“Beat has the best potential for bringing visual storytelling to the Fulldome form, providing a psychological landscape. Abstract expressionism finally has a voice through Beat”
Best Student – Die Wundertrommel
“Playful exploration of Fulldome and a cool reference to old tech zoetrope – punches above its weight”
Best Use of Dome – Infinite Horizon
“Elegant and minimal, understated immersion. Great soundscape.”
Best Sonic Experience – Ride Zero
“Great immersive sonic expression, synthetic experience… big up the jungle massive!”
Best Experimental – Beat
“New perspectives and exciting potential for collaged video in Fulldome.”
Best Narrative – Vessel
“Exceptional presentation of narrative form within the dome medium.”
 

New phone app Artory promises to boost Plymouth’s culture circulation

A brand new app promising to be the ‘ultimate guide to Plymouth’s art and culture’ soft-launches next month (December) – and it’s based on i-DAT’s Qualia emotion-measuring technology.

Artory is a free app that leads users to the city’s culture hotspots and then rewards them with exclusive offers. Artory-users will have a chance to earn Art Miles by visiting venues and leaving feedback. These can be exchanged in participating cultural venues all over the city for drinks, discounts and VIP offers.
Venues and attractions will be able to fill the app with what’s on listings and events, helping to promote Plymouth’s cultural assets to a connected audience of city residents and visitors.
Art Miles earned in one venue can be used in other venues, thanks to the collaborative approach taken by the organisations involved.

Artory will be available in app stores for both iPhones and Android devices from December 15. The app’s official launch will be in January 2015.

Although what’s on apps are commonplace, the crucial difference with Artory is that it offers visitors incentives for leaving feedback on what they thought of the show, the exhibition, the film or the attraction.

This is because Artory is based on the ‘analytics engine’ Qualia, developed in 2013 by i-DAT at Plymouth University with the University of Warwick and Cheltenham Festivals. This mood-measuring technology makes it easy for app-users to record their feelings and emotions about the art and culture they’ve just viewed.

This is a huge step forward from the usual feedback forms that present culture fans with paperwork just after they’ve experienced a show or a performance.

Evaluating audience feedback is a vital task for culture organisations, giving them important information that can support funding applications or direct future programming. So by making that data-collection easy, fun and tangibly rewarding, Artory helps both the city’s culture attractions and its visitors.
The app’s launch marks the culmination of a year of work by arts organisations working together to boost local culture, despite Plymouth’s unsuccessful bid to be City of Culture 2017.

This city-wide initiative has been led, designed and produced by i-DAT and Barbican-based Plymouth Arts Centre (in conjunction with Elixel and the Plymouth Culture Guide Group: Theatre Royal, Barbican Theatre, Plymouth City Museum and Gallery, The Gallery Plymouth College of Art, Peninsula Arts Plymouth University, KARST, Ocean Studios, Take a Part, Effervescent, Plymouth Dance, Plymouth Culture Board).

The app is funded by i-DAT, Plymouth Arts Centre, Destination Plymouth, Plymouth City Council and Plymouth Culture Board.

The software behind Artory is open-source, meaning that once it has been piloted in Plymouth, it will be available for use by other cities to promote their cultural activity.

Venues participating at present include Theatre Royal, Plymouth City Museum and Art Gallery, Ocean Studios, Peninsula Arts at Plymouth University, Barbican Theatre, KARST, The Gallery at Plymouth College of Art, Take a Part, and Plymouth Dance.

CYNETart, Dresden, Germany, November 13-19

We’ll be representing dome art in Germany this weekend, thanks to our partnership in a European dome network.

https://i-dat.org/emdl-european-mobile-dome-lab/

We are attending Cynetart in Dresden, from November 13-19, taking research work that fuses our immersive dome technologies with phones, performance, gamification and participation.


We are there because of our status as UK partners in the European Union-funded project E/M/D/L – the European Mobile Dome Lab for Artistic Research.
Cynetart is a festival of computer-based art, science, culture & media technology.

Explorations in immersive vision take us round in (international) circles

A futuristic festival that i-DAT helped to found takes place at the National Space Centre this autumn.
Fulldome UK 2014 takes place on November 7 & 8 and offers 2 days of inspirational screenings, live VJ performances, radical debates and forward-thinking visions in sound and image.
The event takes place at the National Space Centre in Leicester and is open to the public. Tickets are available online here: www.fulldome.org.uk/tickets
Visitors can expect to see ‘fulldome art’ – an emergent artform using immersive environments and digital technologies to push the boundaries of artistic practice. International and UK fulldome film-makers, audio researchers and programmers will display their works at the event.
Fulldome works can be linear and non-linear, produced or generative, interactive and performative experiences projected onto the ‘full’ domed surface traditionally found in planetaria.
This makes for a highly immersive audience experience, challenging established models of cinema and gallery spaces.
Fulldome UK 2014 will host work by the following, and many more:

The festival – co-founded by i-DAT – is in its fourth year and is run by a not-for-profit association that supports artists and researchers working within fulldome immersive environments; its partners include i-DAT’s Professor Mike Phillips, GaiaNova, The Computer Arts Society (CAS) and the National Space Centre (NSC) through NSC Creative.
“We’re defining the emergent artform of fulldome art with collaborations with the world’s leading performers, projection mapping experts and VJs. It’s super-cool”, said Mike.
i-DAT is awaiting news on Arts Council England funding for staging Fulldome UK 2014 in Leicester.
Previous festival action happened in August at Kendal Calling. Fulldome UK curated a cross-section of some of its best immersive audio-visual short fulldome films to support a playback of the groundbreaking fulldome album The Search Engine by Ninja Tunes artist DJ Food – making its UK music festival premiere. There were also lectures that flew festival-goers through the Universe and beyond the stars!
Back in June and July i-DAT hosted a week-long E/M/D/L fulldome prototyping workshop in Plymouth, inviting international participants to experiment with the platform.
The artistic research that took place during the workshop was in the areas of projection mapping, performance and interactivity “contributing to a redefinition of fulldome art,” said Mike Phillips.
i-DAT is the UK partner of E/M/D/L – The European Mobile Dome Lab for Artistic Research – an international collaboration awarded €400k by the EU Culture Programme.
E/M/D/L is a network for the exchange of artistic and technological expertise within the full-dome medium. The partnership connects four European and three Canadian institutions and cultural partners, all leaders in this field, sharing and expanding skills, methodologies, strategies and content.
The project began in February this year, and by September 2015 there will have been eight residencies and public presentations in five countries, using a mobile dome architectural structure equipped with cutting-edge technologies.
E/M/D/L will climax with a series of performances at the world’s most sophisticated virtual theatre, the Satosphere in Montreal, Canada.
Partners in E/M/D/L include i-DAT, the University of Applied Arts in Vienna, Austria, the Trans-Media Academy Hellerau/CYNETART Festival in Dresden, Germany, the National and Kapodistrian University of Athens in Greece, the Society for Arts and Technology and kondition pluriel in Montréal, Canada, and LANTISS (Laboratoire des Nouvelles Technologies de l’Image, du Son et de la Scène)/Université Laval, Quebec City, Canada.

i-DAT & IBM thinking smarter together

One of the biggest names in the digital realm was the guest of i-DAT and Plymouth University earlier this month for the Smarter Planet Lab event.
The event was open to Plymouth University staff, students, researchers and IBM staff with an interest in the Smart Agenda / Internet of Things.
The conference stimulated discussion and an exchange of ideas around specific themes including: art and audience, culture and heritage, digital cities / digital civic, environment and sustainability.

At the conference, IBM backed i-DAT’s research ethic. Said i-DAT Creative Director Birgitte Aga: “IBM backed up our approach. We’re about research and prototyping. We’re not about development – that’s for others to do. That’s what allows us to be at the frontier and to keep experimenting”.

Smile – you’re on social media

We will be using artificial neural networks in our latest project, SMILE.

http://culturesmile.org/

We successfully bid to the Arts & Humanities Research Council for a project using Social Media to Identify and Leverage Engagement, and now we’ll be working with University of Cambridge Museums and Visual Arts Southwest.

We’re putting our best brains on it: robot ones. “We’ll be using artificial neural networks to analyse social mood,” said i-DAT’s Mike Phillips.
The project extends the work we’ve done in partnership to develop Qualia – our sentiment analysis app that measures arts and culture audience mood and incentivises audiences to leave their emotional feedback.

For SMILE, we’ll be drawing on international expertise and cross-disciplinary working, spanning arts technology, communication, sociology and computer science, to deliver new insights about social media analytics and to develop an open-source sentiment analysis tool with improved accuracy, ‘calibrated to the arts and culture discourse’.

 

Introduction

Our implementation focuses on extracting features from the raw data while taking into account the temporal aspects of the problem. We merge ideas put forward by DNNs and RNNs, aiming for a system that self-organises its representation of the data and accommodates the temporality of language. The system should work as an encoder, transforming and compacting the data in whatever way best suits it. It is important to note the freedom of the system here: no intervention or assumptions have been made about how knowledge should be organised or extracted, other than what is imposed by the data itself.

Our data consist of tweets, geolocations, timestamps and so on, collected from different festivals around the UK, and our goal is to extract interesting features from them. We believe that, given the amount of data we have, emergent properties would help explain the data or even provide meaningful insights into how they could be manipulated. Since the data are closely correlated with the behaviour of festival attendees, a reverse procedure could in turn influence their behaviour and their reaction to events organised by the festival.
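
For illustration only, a single collected record might look something like the sketch below; these field names are hypothetical and do not reflect the actual Qualia schema.

    # Illustrative shape of one collected record; field names are hypothetical,
    # not the actual Qualia schema.
    sample_record = {
        "text": "Loving the main stage tonight! #festival",
        "timestamp": "2014-07-26T21:14:03Z",              # when the tweet was posted
        "geolocation": {"lat": 50.3755, "lon": -4.1427},  # where it was posted
        "festival": "Plymouth pilot event",               # which festival it came from
    }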

Background

Conventional machine learning techniques have limitations in their ability to process raw data.

The implementation of such methods often requires domain expertise and delicate feature engineering. Deep learning algorithms, on the other hand, have shown another way forward: representation learning allows suitable representations to be discovered from the raw data.

The data are passed through multiple non-linear layers; each layer transforms the data into a different representation, taking as its input the output of the layer below. Thanks to the distributed way in which the raw input is encoded, the multiple representation levels and the power of composition, deep networks have shown promising results in a range of applications and have set new records in speech recognition and image recognition.

By pre-training layers like these, building gradually more complicated feature extractors, the weights of the network can be initialised at “good” values. By adding an extra output layer, the whole system can then be trained and fine-tuned with standard backpropagation. The hidden layers of a multilayer neural network learn to represent the network’s inputs in a way that makes it easier to predict the target outputs. This is nicely demonstrated by training a multilayer neural network to predict the next word in a sequence from a local context of earlier words.
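
As a minimal sketch of the stacking idea (in NumPy, with random weights standing in for the pre-trained ones and the fine-tuning step omitted):

    import numpy as np

    def layer(x, W, b):
        # One non-linear layer: it re-represents the output of the layer below.
        return np.tanh(x @ W + b)

    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 50))        # a raw input vector
    sizes = [50, 128, 64, 32]           # progressively more compact representations

    h = x
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.normal(scale=0.1, size=(n_in, n_out))  # in practice initialised by pre-training
        b = np.zeros(n_out)
        h = layer(h, W, b)              # each level feeds the one above

    print(h.shape)  # (1, 32): the top-level representation used for prediction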

When trained to predict the next word in a news story, for example, the learned word vectors for Tuesday and Wednesday are very similar, as are the word vectors for Sweden and Norway. Such representations are called distributed representations because their elements (the features) are not mutually exclusive and their many configurations correspond to the variations seen in the observed data. These word vectors are composed of learned features that were not determined ahead of time by experts, but automatically discovered by the neural network. Vector representations of words learned from text are now very widely used in natural language applications.

Another type of network that has shown interesting results is the Recurrent Neural Network (RNN). RNNs try to capture the temporal aspects of the data fed to them by considering multiple time steps in their processing. Thanks to advances in their architecture [9,10] and in ways of training them [11,12], RNNs have been found to be very good at predicting the next character in a text [13] or the next word in a sequence [7], but they can also be used for more complex tasks. For example, after reading an English sentence one word at a time, an English ‘encoder’ network can be trained so that the final state vector of its hidden units is a good representation of the thought expressed by the sentence.
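
A much-simplified sketch of this encoder idea, using a plain recurrent update in NumPy rather than any of the architectures cited above: the hidden state after the last word is a fixed-length representation of the whole sentence.

    import numpy as np

    def rnn_encode(word_vectors, W_in, W_rec, b):
        # Plain recurrent update; the final hidden state summarises the sentence.
        h = np.zeros(W_rec.shape[0])
        for x in word_vectors:                    # one embedded word per time step
            h = np.tanh(W_in @ x + W_rec @ h + b)
        return h

    rng = np.random.default_rng(1)
    emb_dim, hid_dim = 200, 128
    sentence = [rng.normal(size=emb_dim) for _ in range(7)]   # seven embedded words
    W_in = rng.normal(scale=0.1, size=(hid_dim, emb_dim))
    W_rec = rng.normal(scale=0.1, size=(hid_dim, hid_dim))
    b = np.zeros(hid_dim)

    thought_vector = rnn_encode(sentence, W_in, W_rec, b)
    print(thought_vector.shape)   # (128,) regardless of sentence length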

Despite their flexibility and power, DNNs can only be applied to problems whose inputs and targets can be sensibly encoded with vectors of fixed dimensionality. This is a significant limitation, since many important problems – speech recognition and machine translation, for example – are best expressed with sequences whose lengths are not known a priori.

 

Method

For the preprocessing of tweets we worked with unsupervised techniques. For the encoding of the tweets we focused on natural language processing and used word embeddings to represent the words in the tweets. In this way we capture linguistic regularities found in our training sentences (festival tweets), which end up placed close together in a high-dimensional feature space; in our case this space has between 200 and 500 dimensions. For the word embeddings we used Google’s “word2vec” tool, which provides a fast and reliable implementation of two algorithms, continuous bag-of-words and continuous skip-gram [6, 7, 8].
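
A minimal training sketch, assuming the gensim re-implementation of word2vec rather than Google’s original C tool (parameter names follow gensim 4.x; the toy corpus and min_count setting are placeholders):

    from gensim.models import Word2Vec

    tokenised_tweets = [
        ["loving", "the", "main", "stage", "tonight"],
        ["queue", "for", "the", "dome", "is", "huge"],
        # ... in practice, the full tweet corpus plus the Wikipedia text
    ]

    model = Word2Vec(
        sentences=tokenised_tweets,
        vector_size=300,   # within the 200-500 range described above
        sg=1,              # 1 = continuous skip-gram, 0 = continuous bag-of-words
        min_count=1,       # 1 only because this toy corpus is tiny
        workers=4,
    )

    # Nearby vectors in the learned space tend to be linguistically related.
    print(model.wv.most_similar("stage", topn=3))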

Using the same library we are also able to learn and extract phrases in our dataset of tweets. In this way we can identify ‘san francisco’ and encode it as a single vector, where otherwise ‘san’ and ‘francisco’ would be represented as two separate vectors. Being an unsupervised method, the above needs a large amount of data to train properly. The amount of data captured by the Qualia API is substantial but not quite enough, so for training the word embeddings we use a large corpus (the first billion characters of the latest Wikipedia dump) in addition to the data provided by the Qualia API.
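
A sketch of the phrase step using gensim’s Phrases model (min_count and threshold are lowered only so the toy corpus triggers the detection):

    from gensim.models.phrases import Phrases

    tokenised_tweets = [
        ["great", "gig", "in", "san", "francisco"],
        ["flying", "to", "san", "francisco", "next", "week"],
        # ... the full tweet and Wikipedia corpus in practice
    ]

    phrases = Phrases(tokenised_tweets, min_count=1, threshold=0.1)

    # Once "san francisco" occurs often enough it is emitted as a single token.
    print(phrases[["see", "you", "in", "san", "francisco"]])
    # -> ['see', 'you', 'in', 'san_francisco']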

To pass the tweets to the network we need to preprocess them, keeping in mind that we need an encoding of a fixed length. We do so by preprocessing the tweets with an RNN-RBM [1], a recurrent restricted Boltzmann machine. The RNN-RBM is an energy-based model for density estimation of temporal sequences, which lets us maintain information about the temporal relations of words and phrases within tweets. In the hidden layer of the recurrence we also obtain a fixed-length representation of each tweet. We want to feed this representation, as the encoded version of the tweet, into the next stage together with any other aligned information we have for that event, from that user or at that time.
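
The RNN-RBM itself is too involved for a short example, but once it yields a fixed-length hidden representation of a tweet, joining that representation to the aligned metadata could look roughly like this (a hypothetical illustration, not the project code):

    import numpy as np

    def build_feature_vector(tweet_hidden, hour_of_day, lat, lon):
        # Concatenate the fixed-length tweet encoding with aligned metadata.
        meta = np.array([
            np.sin(2 * np.pi * hour_of_day / 24.0),  # time of day, encoded cyclically
            np.cos(2 * np.pi * hour_of_day / 24.0),
            lat / 90.0,                               # crude normalisation of coordinates
            lon / 180.0,
        ])
        return np.concatenate([tweet_hidden, meta])

    tweet_hidden = np.random.default_rng(2).normal(size=128)  # stand-in for the hidden state
    features = build_feature_vector(tweet_hidden, hour_of_day=21.2, lat=50.37, lon=-4.14)
    print(features.shape)   # (132,)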

We hope that, given the feature-extraction capabilities of the networks, important features of the data will emerge. At the same time, given the bidirectional nature of both mechanisms, we will be able to create exemplar objects of the important features extracted.

Given the amount of data needed, and the fact that the system should be able to work in real time, we also implemented a Python API bound to the Qualia v1 API. By fetching and processing tweets in parallel, this mechanism provides enough throughput for the algorithm itself, which runs in a massively parallel fashion on GPUs using Theano-accelerated Python scripts.
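
A rough sketch of the fetching side using a thread pool; the endpoint URL and JSON fields below are placeholders, not the real Qualia v1 interface:

    from concurrent.futures import ThreadPoolExecutor
    import requests

    QUALIA_URL = "https://example.org/qualia/v1/tweets"   # placeholder endpoint

    def fetch_page(page):
        # Fetch one page of tweets from the (hypothetical) Qualia endpoint.
        resp = requests.get(QUALIA_URL, params={"page": page}, timeout=10)
        resp.raise_for_status()
        return resp.json()["tweets"]                      # placeholder field name

    def preprocess(tweet):
        # Minimal tokenisation before the text is handed to the encoder.
        return tweet["text"].lower().split()

    def fetch_and_preprocess(pages, workers=8):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for batch in pool.map(fetch_page, pages):
                for tweet in batch:
                    yield preprocess(tweet)

    # Usage: stream preprocessed tweets into the GPU-side (Theano) training loop.
    # for tokens in fetch_and_preprocess(range(1, 100)):
    #     feed_to_encoder(tokens)   # hypothetical downstream function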

 

  1. Boulanger-Lewandowski, N. (2012). Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. arXiv preprint arXiv:1206.6392. http://arxiv.org/abs/1206.6392
  2. Pak, A., & Paroubek, P. (2010). Twitter as a corpus for sentiment analysis and opinion mining. In Proceedings of LREC 2010, 1320–1326.
  3. LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324.
  4. Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504–507.
  5. Carreira-Perpinan, M., & Hinton, G. (2005). On contrastive divergence learning. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics (AISTATS).
  6. Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. In Proceedings of the ICLR Workshop.
  7. Mikolov, T., Sutskever, I., Chen, K., Corrado, G., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS.
  8. Mikolov, T., Yih, W., & Zweig, G. (2013). Linguistic regularities in continuous space word representations. In Proceedings of NAACL HLT.
  9. Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9, 1735–1780.
  10. El Hihi, S., & Bengio, Y. (1995). Hierarchical recurrent neural networks for long-term dependencies. In Advances in Neural Information Processing Systems 8. http://papers.nips.cc/paper/1102-hierarchical-recurrent-neural-networks-for-long-term-dependencies
  11. Sutskever, I. (2012). Training Recurrent Neural Networks. PhD thesis, University of Toronto.
  12. Pascanu, R., Mikolov, T., & Bengio, Y. (2013). On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, 1310–1318.
  13. Sutskever, I., Martens, J., & Hinton, G. E. (2011). Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning, 1017–1024.

Chris Melidis.

Development blog PDF export:

Dr Eric Jensen, Associate Professor in the Department of Sociology at the University of Warwick, is a widely published researcher in the field of public engagement and is the Principal Investigator leading this project. Other team members include co-investigator Dr Maria Liakata, Assistant Professor in the Department of Computer Science at the University of Warwick; Professor Mike Phillips; researcher and i-DAT developer Chris Hunt; Chris Melidis; and research consultant Dr David Ritchie, Professor of Communication at Portland State University.

https://warwick.ac.uk/fac/soc/sociology/staff/jensen/ericjensen/smile/workshop/

https://warwick.ac.uk/fac/soc/sociology/staff/jensen/ericjensen/smile/

FULLDOME UK

FULLDOME UK supports the development and exhibition of ‘Fulldome art’, an emergent art form that embraces digital technologies and powerful immersive environments to push the boundaries of artistic practice. It exhibits ‘Fulldome’ productions by UK and international artists and facilitates a global network, connecting, supporting, developing and promoting Fulldome artists, programmers and researchers nationally and internationally.
Fulldome art works are multifarious, consisting of linear and non-linear, produced or generative, interactive and performative experiences projected onto the ‘full’ domed surface traditionally found in planetaria, providing a unique and highly immersive audience experience that challenges established models of cinema and gallery spaces.

FULLDOME UK is produced in partnership with GaiaNova, The Computer Arts Society (CAS) and the National Space Centre (NSC) through NSC Creative.

FULLDOME UK is a not-for-profit association supporting artists and researchers working within Fulldome immersive environments.
It organises events with the goal of promoting Fulldome as an artistic medium in its own right, and as a platform for research into data visualisation, group collaboration and the effects of immersive environments on our perceptual and cognitive processes.
FULLDOME UK 2014 took place at the National Space Centre in Leicester on the 7th & 8th November 2014.
FULLDOME UK 2012 took place at the National Space Centre in Leicester on the 16th & 17th November 2012.
FULLDOME UK 2011 took place at Thinktank, Birmingham Science Museum, on the 12th & 13th March 2011.
FULLDOME UK 2010 took place at i-DAT’s Immersive Vision Theatre in Plymouth on the 10th & 11th July 2010.