MEDIASPACE


 

A series of satellite TV and journal publications. A networked telematic comic.

MEDIASPACE:
The intent of ‘MEDIASPACE’, whether in its ‘dead’ paper-based form or the ‘live’ digital forms of satellite and internet, is to explore the implications of new media forms and emergent fields of digital practice in art and design. ‘MEDIASPACE’ is an experimental publishing project that explores the integration of print (‘MEDIASPACE’ is published as part of the CADE (Computers in Art and Design Education) journal Digital Creativity by SWETS & ZEITLINGER), the WWW and interactive satellite transmissions (funded by the European Space Agency (ESA), the British National Space Centre (BNSC) and WIRE (Why ISDN Resources in Education), and carried over the Olympus, EUTELSAT and INTELSAT satellites via a TDS-4b satellite uplink), incorporating live studio broadcasts, ISDN-based video conferencing and asynchronous email/ISDN tutorials. The convergence of these technologies generates a distributed digital ‘space’ (satellite footprint, studio space, screen space, WWW space, location/reception space and the printed page). It also establishes a novel and dynamic set of relationships between the presenters (studio based), the participant/audience (located across Europe) and the reader. As an electronic publishing experiment in real-time (‘live’ media) delivery, combined with a backbone of pre-packaged information (‘dead’ media content), the ‘MEDIASPACE’ transmissions provide a provocative model for the convergence of ‘publishing’, ‘networked’ and ‘broadcast’ forms and technologies.

Some text and images recovered from the WIRE transmissions:

EADTU WIRE TRANSMISSIONS

European Association of Distance Teaching Universities (EADTU). https://eadtu.eu/

 

Information for Reception Sites.

Test Transmission

22.10.96 11.00 – 12.00 GMT

Discovering Multimedia

05.11.96 12.00 – 13.00 GMT

Managing the Multimedia Process

03.12.96 12.00 – 13.00 GMT

Creating Multimedia

07.01.97 12.00 – 13.00 GMT

Virtual Environments

04.02.97 12.00 – 13.00 GMT

Multimedia and the Internet

04.03.97 12.00 – 13.00 GMT

“In a social situation, the number of people involved in a conversation or other symbolic exchange is the ‘space’ in which learning occurs.”

(Rom Harré et al, 1985)

 

 

 

“A boundary is not that at which something stops but, as the Greeks recognised, the boundary is that from which something begins its presencing. That is why the concept is that of horismos, that is, the horizon, the boundary. Space is in essence that for which room has been made, that which is let into its bounds. That for which room is made is always granted and hence is joined, that is, gathered, by virtue of a location, that is, by such a thing as the bridge. Accordingly, spaces receive their being from locations and not from ‘space’.”

(Martin Heidegger, ‘Building Dwelling Thinking’, 1954)

 

 

 

 

 

The network diagrams below try to illustrate the expected interactions and structures to be explored throughout the transmissions. The physical structures have been broken down into the transient and the static.

 

Transient:

Green disks represent the reception sites;

Purple balls the ISDN asynchronous tutorials;

Red disks the studio transmissions.




Static:

Blue disks represent the permanent sites: the World Wide Web site and the FirstClass server.


 

Of course, the reception sites and the studio are physically static; however, for the purposes of MEDIASPACE they exist only for the duration of the transmission.
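By way of a retrospective illustration only (a minimal sketch in Python, not part of the original MEDIASPACE software; the node names and time windows are invented), the transient/static structure described above can be modelled as nodes that carry a lifetime:

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    kind: str               # "transient" or "static"
    window: tuple | None    # (start_minute, end_minute), or None for static nodes

# Static nodes (the WWW site and FirstClass server) persist between
# transmissions; transient nodes exist only while a broadcast or tutorial runs.
nodes = [
    Node("WWW site", "static", None),
    Node("FirstClass server", "static", None),
    Node("studio transmission", "transient", (0, 60)),
    Node("reception site", "transient", (0, 60)),
    Node("ISDN tutorial", "transient", (60, 90)),
]

def active_at(node, minute):
    # Static nodes are always reachable; transient ones only inside their window.
    return node.window is None or node.window[0] <= minute < node.window[1]

print([n.name for n in nodes if active_at(n, 30)])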

“There is not only this sharpening and refinement of the brain going on, but there has been what our great grandparents would have considered an immense increase in the amount, the quality and the accessibility of knowledge. As the individual brain quickens and becomes more skilful, there also appears a collective brain, the encyclopaedia, the Fundamental Knowledge System, which accumulates, sorts, keeps in order and renders available everything that is known.”

(The Shape of Things to Come, H.G. Wells, 1933)

 

 

 

A train ticket represents a journey from A to B whilst at the same time representing a location (the train seat or carriage) for a duration (the time of the journey from beginning to end). In a similar way the MEDIASPACE transmissions are a broadcast from A to B, whilst at the same time evoking an extended location for the duration of each broadcast (stretching out, through networks, across the boundaries defining the space of each site), a new kind of space, hopefully a landscape of interactivity.

The Studio:

The Hoe T.V. Studio is located about a mile from the Satellite Uplink and is linked by a direct land line. The Satellite Uplink is based at the main University of Plymouth Campus. The studio has been producing educational and professional programmes for a number of years and is equipped with professional broadcasting and editing facilities.

The MEDIASPACE Transmissions regularly use chromakey to provide backdrops for the presenters.

Video output is taken from the workstations on the studio floor and mixed with the camera feed. This allows the presenters to interact directly with the computer screen.

 

 

 

Each programme usually uses three cameras: one in a fixed position for the majority of the chromakey shots, and two others roaming around the studio.

Production team: Will Bakali / Kate Bryant / Mic Cady / Geoff Cox / Culver Epps / Dave Flynn / Debbie Goacher / Dan Livingstone / Rob Morrison (SGI) / Jill Mortimer / Joe Nash / James Norwood / Mike Phillips / Chris Speed / Adrian Vranch

Guests: Roy Ascott (CAiiA) / Brian Eno / Andy Finney / Tony Tucker (Macromedia) / Simon Turley / 


EURONET Broadcasts (circa 1999).

 

“Wire – Mediaspace”: Converging technologies for distance learning in multimedia

A. Vranch, M. Phillips and R. Winders.

Computing Service, University of Plymouth, Seale-Hayne Campus, Newton Abbot, Devon, TQ12 6NQ, UK.
Paper presented at the CAL 97 Conference “Superhighways, Super CAL, Super Learning?”, University of Exeter, 23rd – 26th March 1997.

Keywords: Satellite, IT and learning, Multimedia, Rural, developing and remote areas, Video conferencing

Abstract

The University of Plymouth is developing expertise and an infrastructure for effective delivery of communications, information and learning at a distance. This strategy fits within a regional framework comprising seven main campuses, separated by some 200 km at the extremes, close links with local Partner Colleges and a lead role in RATIO, an initiative to set up a regional network of telematics centres for distance learning. “Wire” (Why ISDN Resources in Education?) is a three year European project offering five distance courses to ten Euro Study Centres. The University of Plymouth contribution to Wire is “Mediaspace”, a multimedia course comprising a series of live, interactive satellite TV transmissions, with supporting sessions, using ISDN video conferencing, remote control, file transfer, FirstClass electronic conferencing and WWW server access. This paper describes the development, delivery and evaluation of the Wire – Mediaspace course in terms of identifying combinations of converged technologies that provide effective distance learning. This project has implications in terms of technology transfer of experience and evaluation techniques to developments in regional and inter-regional distance learning courses. In a situation of rapid technological change the approach and outcomes of Wire – Mediaspace are expected to have an impact on delivery of effective distance learning for some time to come.

Introduction

The University of Plymouth and the South West region

The University of Plymouth is situated in the South West of England, distributed over seven main campuses in the counties of Devon, Cornwall and Somerset, separated by some 200 km at the extremes. In addition, the University has developed close links with local Partner Colleges of Further and Higher Education and is the lead partner in the RATIO (Rural Area Training and Information Opportunities) project, an initiative to set up a network of telematics centres in the region for distance education and training. Within this regional framework the University is developing expertise and an infrastructure for effective delivery of communications, information and learning at a distance.

The “Wire – Mediaspace” course

“Wire” (Why ISDN Resources in Education?) is a European project to evaluate the effectiveness and costs of using ISDN for the delivery of distance learning courses. The project is funded and managed by the European Association of Distance Teaching Universities in Heerlen, The Netherlands as part of the European Open University Network. Partners from Finland, Belgium, UK, France and The Netherlands contribute to the project as course providers and course recipients through their Euro Study Centres, with support from telecommunications companies.

Wire is a three year project offering five distance courses to ten Euro Study Centres. The University of Plymouth contribution to Wire is “Mediaspace”, a series of live, interactive satellite TV transmissions, with supporting sessions, using ISDN video conferencing, remote control, file transfer, FirstClass electronic conferencing and WWW server access.

The strategic importance of Wire – Mediaspace

The current commitment for distance delivery of courses by the University is to over sixty telematics reception sites in the south west region, including campuses of the University, Partner Colleges and RATIO training centres. The Wire – Mediaspace project is a key contributor in identifying appropriate combinations of converged technologies that deliver effective, viable learning at a distance.

Development of techniques to integrate live satellite TV with other converging technologies for distance learning

Early live satellite TV transmissions with audio participation

The roots of the current Mediaspace series in the Wire project go back several years to the early trials at Plymouth that explored the potential for distance learning via live interactive TV broadcasts. In the early years of development the interaction with the viewers was achieved mainly using audio conferencing. In the early series of “StarNet” broadcasts the format of programmes was based on a professional presenter, with studio guests who were the business studies subject specialists. Interaction with viewers was by telephone and the presenters used standard handsets to talk to viewers, who dialled in via an audio bridge. The handsets in the studio were later replaced by studio microphones and speakers which made interaction with viewers appear more natural on screen, a key factor in enhancing viewer acceptance of the approach, identified via questionnaire responses.

Early experiences with audio conferencing in live transmissions identified technical problems, including a form of audio feedback called “howlround” and the general need for a specially trained operator to be available at the receive site. From this was developed the concept of the “facilitator”, a trained member of staff who was able to ensure that the satellite receiver was tuned in properly and who could oversee the use of audio conferencing equipment. This was important to provide an environment in which the viewer would feel relaxed and willing to participate, again a key factor in enhancing viewer acceptance.

Early integration of computers into live satellite TV transmissions

As the broadcasts in StarNet and other series progressed, an interest developed in using computers to enhance the programmes. Graphics for presentation and electronic communication for interaction were identified as two key functions for the use of computers in this way. For example, a low-cost 286 PC computer was used for running an animation in a 1990 programme on computer viruses. In the “Solstice” series, interaction with electronic mail was explored using simple X25 communications into the studio and this helped to provide more feedback from viewers, although mail messages were not always delivered promptly. In the Solstice series librarians were trained at a distance in the use of computer-based library search techniques (Hughes, 1991).

Development of converged technologies for live satellite TV transmissions

Following these initial pilot broadcasts, techniques for using computers in live satellite programmes have been under constant review, in line with the appearance of new developments in the TV, computer and communications industries. Use of computers for presentations was developed further in the IT Training by Satellite series (Vranch, 1993) in which a Macintosh screen output was interfaced to the control room via a Mediator to provide quality computer images for broadcast in a combination of presentations of “bullet points” and animations and for live demonstration of specific software. A novel aspect of this approach was that for the first time it was the presenter who was in control of sophisticated computer-based presentation and graphics input to the broadcast, rather than the director in the control room. This innovation opened up the possibilities for a different, less formal approach to presenting live programmes that is now used in the Wire – Mediaspace series. It also demonstrated the analogy between using a computer to present a live TV programme and delivering a lecture in a modern auditorium. Early versions of Mediaspace were developed as single TV broadcasts using analogue video conferencing and X25 electronic mail on a much smaller scale than Wire – Mediaspace (Honeywill et al., 1995).

Wider availability of ISDN video conferencing has enhanced the potential for interaction with guest presenters and questions from viewers, both of which added much to the content and style of live transmissions. This development also enabled the presenter in the studio to get visual feedback from the viewers at a distance, again improving interaction and making live TV presentations analogous to lecturing in a large, modern lecture theatre.

Course content, interaction and evaluation

Interactive TV transmissions

Details of topics covered in each transmission of this multimedia course are given in Appendix 1. Technical details of transmissions are given in Appendix 2. Interaction is provided in the live TV sessions by ISDN video conferencing and FirstClass electronic conferencing, directly into the studio floor (Figure 1). Additional input from viewers is made possible by telephone or facsimile to the control room and this input can be relayed immediately to the presenters. The FirstClass system provides a flexible means to obtain live feedback from viewers unobtrusively and enables efficient management of dial-in for ISDN video conferences. Video conferences are used to bring guests “into the studio” at a distance and to interact with viewers for comments and questions. In addition, the FirstClass electronic conferencing system enables an interactive dialogue between all participants before, during and after the transmissions.

The presenters control the delivery of the programme content from the studio floor as computer presentations and this output is integrated in the control room mixer with video conference PC output, CRO “back projection” technology, audio and camera video outputs, to provide the transmitted signal. The studio director is in voice contact with the presenters and studio staff at all times.

Interactive support sessions

Interaction in support sessions is provided in both synchronous and asynchronous form. Interactive ISDN video conference sessions, with associated remote control via Timbuktu and ISDN file transfers, enable student peer-to-peer and student/tutor interaction in learning and facilitate distance collaboration in multimedia projects. Asynchronous interaction is on-going via the FirstClass conferences, following up ideas generated in previous TV transmissions, adding content during the programmes and providing topics for interaction in the next TV programme or in the next ISDN video conference session. The FirstClass conferences provide an on-going record of the progress of the support sessions in an interactive dialogue and a forum for sharing files as attachments. A WWW server adds a further dimension to this sharing process.

This record is an important resource which can be revisited and built on as the course progresses and can be used later in conjunction with video tapes of the live programme by new groups of students.

Evaluation

Evaluation of Mediaspace is on two levels. First, the course is evaluated in the context of the Wire project in terms of the effectiveness of the combination of technologies used with ISDN compared with other approaches. Second, there is an evaluation of the approach adopted to identify its success in delivering effective distance learning specifically for multimedia as a course topic.

In both cases evaluation is made from a combination of questionnaires and analysis of interaction from video tapes of live programmes, logs of video conference sessions and from analysis of the FirstClass conferences throughout the course. Specific software has been developed to extract key information from the FirstClass conference log files, in addition to manual analysis of the content of conference threads and the comments within.
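The paper does not reproduce the log format or the extraction software, but a minimal sketch of the kind of analysis described, assuming a hypothetical tab-separated export with one (timestamp, author, thread) line per message, might look like this in Python:

import csv
from collections import Counter

def summarise_conference_log(path):
    # Count messages per author and per thread from a hypothetical
    # tab-separated log with columns: timestamp, author, thread.
    by_author, by_thread = Counter(), Counter()
    with open(path, newline="") as f:
        for timestamp, author, thread in csv.reader(f, delimiter="\t"):
            by_author[author] += 1
            by_thread[thread] += 1
    return by_author, by_thread

authors, threads = summarise_conference_log("firstclass_log.tsv")
print("Most active participants:", authors.most_common(5))
print("Busiest threads:", threads.most_common(5))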

Implications for future distance learning developments

The Wire – Mediaspace project gives rise to specific outcomes which have implications for future distance learning work, including:

  1. further consolidation of experience and expertise in identifying the balance needed in technologies for effective distance learning;
  2. development of methods for evaluation of distance learning, based on the use of converged technologies;
  3. the technology transfer of this research-based approach towards application within mainstream activity, for example the University of Plymouth Institute of Health Studies learning programme, RATIO courses and other regional or inter-regional activities.

In addition, as technologies available for delivery of distance learning materials develop, so the present approach of evaluating solutions for effective convergence of technologies becomes more relevant. For example, development of high bandwidth regional, metropolitan or international networks can offer an alternative to satellite for delivery of broadcast quality video, although there may be limitations for mass, simultaneous broadcast over a large area using these terrestrial networks. Nevertheless, principles and evaluation procedures developed in Wire – Mediaspace will still be applicable to these developments.

The use of the satellite data carrier signal, for data broadcast at 100 Kbps at the same time as TV and video, offers new opportunities for mass, simultaneous distribution of learning materials. Furthermore, moves to transmit live TV programmes using MPEG2 digital compression are now taking place from the University of Plymouth uplink, with the cost advantage of reduced satellite rental charges, since only one-eighth (Glover, 1996) of the satellite transponder is required.
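The saving claimed for MPEG2 is simple proportionality. The rental figure below is an invented placeholder, not a quoted rate, but it shows the shape of the calculation:

# Illustrative only: 'full_transponder_rate' is an assumed figure,
# not an actual satellite rental charge.
full_transponder_rate = 1000.0   # cost units per hour for a whole transponder
mpeg2_fraction = 1 / 8           # transponder share per MPEG2 channel (Glover, 1996)

analogue_cost = full_transponder_rate               # analogue TV needs the full transponder
digital_cost = full_transponder_rate * mpeg2_fraction

print(f"Analogue: {analogue_cost:.0f}, MPEG2: {digital_cost:.0f} "
      f"({100 * (1 - mpeg2_fraction):.0f}% saving)")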

Even in a situation of rapid technological change the outcomes of Wire – Mediaspace are expected to have an impact on delivery of effective distance learning for some time to come.

References

Glover, P. (1996) Interactive Digital Television by Satellite, Proceedings of On-line Educa, Berlin, 13 – 15 November

Honeywill, P, Phillips, M and Vranch, A. (1995) Converging New Technologies for Art and Design Education. Proceedings of Digital Creativity, the First Conference on Computers in Art and Design Education (CADE ’95), Brighton, UK, April.

Hughes, A (1991) Project SOLSTICE. ASLIB Information, 19, 11 & 12.

Vranch, A. T. (1993) Staff Development and Training by Satellite in the University of Plymouth, Proceedings of the Olympus Utilisation Conference, Sevilla, Spain, 20 to 22, ESA WPP-60, pp. 247 – 252.

Acknowledgements This work was funded by the European Association of Distance Teaching Universities, with matched funding from the University of Plymouth. This funding support is gratefully acknowledged.

This document was added to the Education-line database 15 October 1998

 

 

autoicon


It’s as though he’s alert to the current debate in the House of Commons chamber. We briefly discuss love, pain and flowers and when I ask him about the internet, he amusingly replies, “Excuse me?” Even with its Y2K interface, Autoicon is a technological wonder. It doesn’t just imagine black people in the future, it preserves them so that they arrive there safely and in their own image.

(2020). ‘Union jack swastikas and space-age braids: Thirteen Ways of Looking – review’, The Guardian, 28 October 2020. Available at: https://www.theguardian.com/artanddesign/2020/oct/28/thirteen-ways-of-looking-review-cultural-identity-herbert-art-gallery-coventry (Accessed: 16 January 2021).

Autoicon is a dynamic internet work and CD-ROM that simulates both the physical presence and elements of the creative personality of the artist Donald Rodney, who died from sickle-cell anaemia. The project builds on Donald Rodney’s artistic practice in his later years, when he increasingly began to delegate key roles in the organisation and production of his artwork. Making reference to this working process, AUTOICON is developed by a close group of friends and artists (his partner Diane Symons, Eddie Chambers, Richard Hylton, Virginia Nimarkoh, and Keith Piper), ironically described as ‘Donald Rodney plc’, who have acted as an advisory and editorial board in the artist’s absence, and who specified the rules by which the ‘automated’ aspects of the project operate. http://www.iniva.org/autoicon/

Web:
[screen capture image]

Backstory:

“I can remember the exact location (though the time is vague – early afternoon of 1997) that it became apparent that the body politic of a white middling (class/age) male would be of little interest. On a stretch of road, halfway up the A38, just outside Buckfastleigh, frequently travelled with Donald in an ageing 2CV back in 1990, en route to install Visceral Canker for the TSW 4 Cities Project. Bizarrely I had recently enacted a personal interpretation of this collaboration through my auto-exsanguination following an ulcer burst a year earlier, and this temporality of the body was a factor in the birth of Autoicon. Memories of all that blood (emerging into the twilight from a Cornish gun emplacement, bloodied, damp and freshly electrocuted) conjured up half forgotten conversations with Donald. In particular the one about the ticking clock that marks the passage of time for all sufferers of Sickle-Cell Anaemia – something about a life expectancy of 36/37 years. I had worked with (or like many of the Donald Rodney PLC – for) Donald on many occasions since our time at the Slade School of Art (1985-87). Back then it was the soundscapes for his installation at the ICA and the donation of my work space, drawings and BBC B I/O boards for the cameraman to walk all over when filming his feature on the State of the Art TV programme. There were other bits and pieces but most notably Psalms (with Guido Bugmann). Autoicon would seem to consolidate that period of time, it embraced our many conversations of parallel generic histories sitting in front of Bentham’s glassy eyes, gave new meaning to a history of medical data and, like the original Autoicon, engaged playfully with an inevitability. The mid-90s were the age of the wannabe avatar, VRML showed such promise and cyberspace was almost tangible. To be in it, part of it, breathing the data, was an ambition being played out in media arts projects all over the world. Yet there was something ghostly and hollow about these apparitions; they were, for the most part, just pixels, vague representations that could neither feel nor be felt. My hope was that, through Donald’s body of work and body politic, the Autoicon marked an avatar upgrade that was both a homage to Bentham’s original and as rich and complex as his vision for his physical body. A 20th Century Autoicon would have to embrace the flesh as much as it would the trace data that leached from the temporality of its owner. My work has since, from time to time, attempted to explore the relationship between meat and data; A Mote it is…, Exposure and the various projects in Bio-OS have all attempted to capture the physical, projected and data body beyond the simple two dimensional representation. But the Autoicon needed to be Donald Rodney – and so we set about making plans from his hospital bed for an Arts Council England GFA application…”
Mike Phillips, 10/10/14.

iniva

Donald Rodney Autoicon CD:
[Images: CD, packaging (outside, inside, back) and screen grabs]

 

donald.rodney:autoicon v1.0

http://www.iniva.org/autoicon

README AUTOICON is a dynamic artwork that simulates both the physical presence and elements of the creative personality of the artist Donald Rodney who – after initiating the project – died from sickle-cell anaemia in March 1998. It builds on Donald Rodney’s artistic practice in his later years, when he increasingly began to delegate key roles in the organisation and production of his artwork.
Making reference to this working process, AUTOICON is developed by a close group of friends and artists (ironically described as ‘Donald Rodney plc’) who have acted as an advisory and editorial board in the artist’s absence, and who specified the rules by which the automated aspects of the project operate.
This CD-ROM, in parallel to the internet version (http://www.iniva.org/autoicon), is automated by programmed rule-sets and works to continually maintain creative output. Users will encounter a ‘live’ presence through a ‘body’ of data (which refers to the mass of medical data produced on the human body), be able to engage in simulated dialogue (derived from interviews and memories), and in turn affect an auto-generative montage-machine that assembles images collected from the user’s hard-drive (rather like a sketchbook of ideas in flux).
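The engine itself was written by Adrian Ward (see the credits below) and its rule-sets are not reproduced here, but the montage idea, sampling fragments of images found on the user’s machine and compositing them, can be sketched in outline. Everything in this Python sketch (directory, tile size, canvas size, file types) is an assumption for illustration, not the Autoicon engine:

import random
from pathlib import Path
from PIL import Image  # Pillow

def montage(source_dir, canvas_size=(640, 480), tile=(80, 60), steps=60):
    # Paste random fragments of images found under source_dir onto a canvas:
    # a loose sketch of an auto-generative montage machine.
    paths = [p for p in Path(source_dir).expanduser().rglob("*")
             if p.suffix.lower() in (".jpg", ".jpeg", ".png", ".gif")]
    if not paths:
        raise SystemExit("no images found")
    canvas = Image.new("RGB", canvas_size, "black")
    for _ in range(steps):
        src = Image.open(random.choice(paths)).convert("RGB")
        if src.width <= tile[0] or src.height <= tile[1]:
            continue  # skip images smaller than one tile
        x = random.randrange(src.width - tile[0])
        y = random.randrange(src.height - tile[1])
        fragment = src.crop((x, y, x + tile[0], y + tile[1]))
        canvas.paste(fragment, (random.randrange(canvas_size[0] - tile[0]),
                                random.randrange(canvas_size[1] - tile[1])))
    return canvas

montage("~/Pictures").save("montage.png")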

Through AUTOICON, participants can generate new work in the spirit of Donald’s art practice as well as offer a challenge to and critique the idea of monolithic creativity. In this way, the project draws attention to current ideas around human-machine assemblages, dis-embodied exchange and deferred authorship – and raises timely questions over digital creativity, ethics and memorial.
Further information on the artist and his work is included on the CD-ROM.
For more information email autoicon@iniva.org

INSTRUCTIONS
This software requires no specific installation – it is designed to run straight off the CD-Rom. However, it does require a working installation of Apple’s QuickTime 4. The installer is provided on the CD-Rom in a folder called ‘QuickTime 4’. Please ensure you have fully installed QuickTime 4 before trying to use AUTOICON.
To launch the AUTOICON software, insert the CD into your CD-Rom drive. Navigate to the files on the CD-Rom (Macintosh users should double click the icon that appears on your desktop, Windows users should browse inside ‘My Computer’). Double-click the ‘AUTOICON’ application icon.

The software takes a few moments to start up. Once loaded, click the ‘Continue’ button to launch the software. Interact with the AUTOICON by typing text into the field in the centre of the screen. While you engage in a discussion with the AUTOICON, a montage will be derived from bits of images found on your own hard-drive.
The ‘Activities’ menu allows you to view other aspects of the project, including a slideshow and artist biography. You can also choose to watch the montage in progress, or you can examine the software’s internal memory. External links to the AUTOICON web site and for email feedback are also provided.

If you would like to export the generated montage, choose ‘Save As…’ from the ‘File’ menu.
To leave the AUTOICON, go to the ‘File’ menu, and choose ‘Quit’.

MINIMUM SYSTEM REQUIREMENTS
Macintosh:
Power PC Processor (G3/266 pref.)
System 8.0 or later
32mb RAM
Quad-speed CD-Rom Drive
QuickTime 4 (installer provided)
Windows:
Pentium P200 Processor or higher
Windows 95, 98, NT or 2000
32mb RAM
Quad-speed CD-Rom Drive
QuickTime 4 (installer provided)

CREDITS
Software written by Adrian Ward (adrian@signwave.co.uk). Produced by Geoff Cox & Mike Phillips; STAR (Science Technology Arts Research, University of Plymouth), inIVA (Institute of International Visual Arts) and Signwave, with support from the Arts Council of England (New Media Fund). With contributions from Eddie Chambers, Richard Hylton, Angelika Koechert, Virginia Nimarkoh, Keith Piper, Gary Stewart & Diane Symons. Media courtesy of the artist’s estate and Black Audio Film Collective. Thanks to Pete Everett, Dave Grogono, Ruth Kelly, Paul Khera and Elliot Lewis.
Project © Copyright 2000 STAR, inIVA, Signwave & the estate of Donald Rodney.
Autoicon engine © Copyright 2000 Signwave and its licensors.
HTMLField component © Copyright 1998-2000 Doug Holton.
Media © Copyright 1999-2000 various sources.
QuickTime and the QuickTime logo are trademarks used under license.

Published by the Institute of International Visual Arts (inIVA) and STAR.
inIVA,
6-8 Standard Place,
Rivington Street,
London, EC2A 3BE, UK.
Internet: http://www.iniva.org
Tel: +44 20 7729 9616
Fax: +44 20 7729 9509
STAR (Science Technology Art Research)
School of Computing,  University of Plymouth,
Drake Circus, Plymouth, PL4 8AA, UK
Internet: http://www.CAiiA-STAR.net
Tel: +44 1752 232541
Fax: +44 1752 232540

Donald Rodney Autoicon Launch at inIVA:
Invitation:
[Images: envelope, envelope back, X-ray invitation]

inIVA Launch.
Still not sure what the urination was about. The Dirty Space? Something may have got lost in translation.
[Photos: DSCF0004 – DSCF0013]

References:
Phillips, M. and Cox, G. ‘Donald Rodney – Autoicon. The Death of an Artist’. In Emergent Futures: Art, Interactivity and New Media / Futuros Emergentes: Arte, Interactividad y Nuevos Medios. Eds Molina, A. and Landa, K. ISBN 84-7822-326-6. Alfons el Magnanim, Valencia, 2000.
Phillips, M. ‘Memoria Technica, Donald G Rodney Autoicon’. Mediaspace 5.
‘Autoicon’ CD-Rom & website (collective), exhibition to accompany conference, ‘Race and Digital Space’, Massachusetts Institute of Technology, USA. April 2001. http://cms.mit.edu/race/
‘Autoicon’ CD-Rom (collective), part of exhibition, ‘Art In Motion II’, Santa Monica Museum of Art, Santa Monica CA, USA. 2001.

Spectactor


Date: 22/05/2000.
This is an experimental project that uses interactive digital media to enable the development of new forms of performance and narrative. It looks at the way that layers of interactivity are created in a live event. It also explores how human interaction is moderated through different forms of technology. In this project the performance operates over a series of inter-connected spaces, created using three technologies: satellite transmission, VRML and email. The performance will ‘take place’ in a T.V. studio and will be transmitted by satellite to other receiving venues, which can communicate with the performers by ISDN video conferencing. Spectactor is funded by DA2 and i-DAT.

Phillips, M., Ride, P., Lavelle, M. and Turley, S. ‘The Spectactor Project: A Multi-user Narrative in Mediaspace’.
‘palmeres for to seken straunge strondes’. Consciousness Reframed III.
‘Spectactor’, Mediaspace 9.

STI – Search for Terrestrial Intelligence


S.T.I. is funded by the SciArt programme (supported by the ACE, the British Council, the Calouste Gulbenkian Foundation, SAC, the Wellcome Trust and NESTA), and turns the technologies that look to deep space for Alien Intelligence back onto Planet Earth in a quest for ‘evidence’ of Terrestrial Intelligence. Looking at Earth from space, the project will develop processing techniques using autonomous computer software agents. S.T.I. moves beyond irony by engaging with our understanding of the ‘real world’ through our senses, whether real or artificially enhanced. Will these autonomous systems ‘know’ the ‘truth’ when they ‘see’ it? The S.T.I. Consortium: STAR, Dr Guido Bugmann, Dr Angelo Cangelosi, Laurent Mignonneau, Christa Sommerer, Dr Nick Veck.
[Images: STI screen grabs]
The STI Server is no longer alive but video grabs can be found here:

Documentation:

[Image: Landscape Trauma cover]
THE S.T.I. PROJECT: THE SEARCH FOR TERRESTRIAL INTELLIGENCE
INTRODUCTION:
The Search For Terrestrial Intelligence is funded through an R&D grant awarded to the Consortium by the Wellcome Trust SciArt competition (http://www.wellcome.ac.uk/sciart). The SciArt competition aims to encourage scientists and artists to work together creatively. Awards are offered to partnerships of scientists and artists working on projects that capture the public’s imagination with some aspect of biology, medicine or health. The competition has been run twice, in 1997 and 1998.
S.T.I. turns the technologies that look to deep space for Alien Intelligence back onto Planet Earth in a quest for ‘evidence’ of Terrestrial Intelligence. Using satellite imaging and remote sensing techniques S.T.I. will scour the Planet Earth using similar processes employed by SETI (the Search for Extra Terrestrial Intelligence). Looking at Earth from space the project will develop processing techniques using autonomous computer software agents. In their search for evidence of intelligence the agents will generate new images, animations and audio (which may produce more questions than answers) which will be publicly accessible on this website.

From the original Wellcome SciArt pitch:
PROJECT:
S.T.I. establishes a common ground for the consortium by sharing the collective knowledge of remote sensing, imaging technologies, autonomous agents (AI and Neural Networks), and On-Line interaction. The Project fuses this knowledge into a challenging exploration of planetary data analysis, through a process of experimental prototyping of a number of autonomous data analysis agents that will reside on this website.
Vision dominates our culture and lies at the heart of scientific and artistic endeavour for truth and knowledge. Increasingly the dominance of the human eye is being challenged by a new generation of technologies that do our seeing for us. These technologies raise critical questions about the nature of the truth and knowledge they elicit, and the way in which we interpret them. In turn these questions raise issues about the way we, through science and art, have always ‘known’ the world. The S.T.I. Project goes beyond the irony of the search for terrestrial intelligence on Earth by engaging with our understanding of the ‘real world’ through our senses, whether real or artificially enhanced. Will these autonomous systems ‘know’ the ‘truth’ when they ‘see’ it?
The S.T.I. Project reveals the processes used by science to ‘see’ the ‘real world’, making transparent the scientific method itself. In so doing S.T.I. generates ‘artefacts’ that question the way we perceive our environment and ourselves. This process of imaging says as much about the observer, the nature of the experiment and the technology as it does about the actual data gathered. This link between knowledge and vision, knowing and seeing, questions the way art and science utilise the visual dialectics of truth and deception.
The S.T.I. Project engages in critical issues surrounding the shift from the hegemony of the eye to the reliance on autonomous systems to do our seeing for us. This shift has an equal impact on scientific processes and creative endeavour. By turning away from ‘outer space’ to an examination of ‘our space’ the project also engages public interest, as expressed in the popular imagination through science fiction (The X-Files, etc.), in the alien within our midst. Do we recognise ourselves when seen through our artificial eyes?
For example: ‘Face on Mars’. The blurred and faded images sent back by the 1976 Viking Orbiter reveal little to the naked eye until they are digitally processed. The processing slowly reveals a skull-like face that stares blankly from the surface of Mars. The technology strips away the grain and fuzz and re-visions. The ‘face’ becomes gradually un-obscured, progressively un-veiled, with features suggestive of eyes, a ridge-like nose and a mouth, its ‘truth’ emerging through the technology. Maybe the processing techniques employed allow us to see more clearly the images we nurture inside our heads. Maybe they bring into sharp focus the things we want to see.
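The Viking processing pipeline itself is not described here, but the general operations alluded to (stripping noise, then stretching contrast and sharpening until features emerge) can be sketched with standard tools. A minimal Python sketch using Pillow; the filenames are placeholders, not actual Viking data products:

from PIL import Image, ImageFilter, ImageOps

raw = Image.open("viking_frame.png").convert("L")          # placeholder input

denoised = raw.filter(ImageFilter.MedianFilter(size=3))    # strip the grain and fuzz
stretched = ImageOps.autocontrast(denoised)                # contrast stretch
sharpened = stretched.filter(ImageFilter.UnsharpMask(radius=2, percent=150))

sharpened.save("viking_processed.png")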

CONSORTIUM:
The S.T.I. Project Consortium brings together artists, scientists and technologists from four research groups (STAR, CNAS, ATR, NRSC) based in three organisations: the University of Plymouth, ATR Media Integration & Communications Research Laboratories, and the National Remote Sensing Centre (NRSC) Limited. The S.T.I. Project involves a Development Committee consisting of eight individuals; short C.V.s are included in the supporting information section of this application. They are:

PATHWAY:
The research and development of the S.T.I. project is broken down into three stages. The nature of the project requires an exploratory and prototyping method of systems design. Although there is no recognised ‘best practice’ critical pathway, STAR has identified a system, which is based on EMG’s production pathway. Many of these activities will run concurrently.

R&D Pathway: May 2000-March 2001:
Phase 1: Concept and Research. This will consist of a design process, which identifies/assesses the nature of: the information currently available from remote sensing technologies; processing techniques currently employed for the analysis of remote sensing data; rules and processes that can be employed to ‘train’ the autonomous systems; design guidelines for the production of the autonomous analysis systems.
Much of this knowledge exists within the S.T.I. consortium; its dissemination between the committee will take place through meetings (IRL and on-line).
Phase 2: Prototyping. This will require the consortium and the production assistant to generate autonomous systems.
Phase 3: Website design and production.
The completion of the R&D stage will be formalised by an S.T.I. Project Seminar/Launch, which will provide a public presentation of the project’s findings and activities.

Signs from the S.T.I. Database:

GM (Generative Music)


GM:

i-DAT ran several Generative Music events. These consisted of live coding, digital sound toys (mostly produced by students on MLA) and audio montages.

Locations:

Sherwell Centre, Plymouth / The Cavern, Exeter / The Cube, Bristol (as part of the RX Artec/DA2 event) / Lovebytes Sheffield.

Cavern Club, Exeter, 16/06/1999

These images and recordings were from the Cavern Club, Exeter on June 16 1999… and people were dancing…

Performers included: Ade Ward (AKA Slub) / James Norwood / Mike Phillips / Dan Livingstone / Geoff Cox /.

Digital Originals (2000), Lovebytes festival, Showroom Cinema, Sheffield (6–8 April).

Installers include: Geoff Cox / James Norwood / Chris Speed /.

Live recordings from:  Sherwell Centre, Plymouth / The Cavern, Exeter / The Cube, Bristol /

[but not sure which]

GM#1: [17 minutes – 14 seconds]

[Probably The Cube]

GM#2: [41 minutes – 36 seconds] –

[think mainly Slub…]

INTERSTICES

Page content salvaged from the original CAiiA-STAR server & unpublished CD-ROM (beta available below):

[SPACE/TIME: 24 and 25 August 1998, at Port Eliot House in St Germans, Cornwall]

INTRODUCTION:

CAiiA-STAR invites you to participate in the first INTERSTICES Symposium, ‘The Architecture of Consciousness’, to be held on 24 and 25 August 1998 at Port Eliot House in St Germans, Cornwall.

Following the CAiiA Conference at UWCN, a Symposium on new developments in interactive design and cyber architecture, set in the context of Consciousness Reframed, will be organised by STAR (University of Plymouth) at the historic Port Eliot House in St Germans. The Symposium will explore the impact of digital technologies on the institutions, locations and structures of cultural production and consumption, and will cover such topics as: new media arts and research centres, educational structures, interactive publishing/broadcasting, intelligent environments and public interfaces.

The Symposium will give participants the opportunity to share and discuss the institutional and architectural implications of the issues raised at the CAiiA Conference. The STAR Symposium will be largely visually-based, utilising interactive multimedia systems (CD ROM, video, projection and the Internet), offering the opportunity to construct, present and view new developments and proposals in a number of cultural, educational and technological contexts from around the world.

The Symposium will take place on 24 and 25 August (a Symposium meal will be held at Port Eliot House on the evening of Sunday 23). Transportation from the CAiiA Conference site at the University of Wales, Newport to the STAR Symposium accommodation at the University of Plymouth’s Robbins Conference Centre will be arranged for participants. Inexpensive individual en-suite accommodation will be available at the Centre.

 

STRUCTURE:

INTERSTICES uses a simple structure to map the Architecture of Consciousness. The initial briefing of participants presents an ‘empty’ computer-generated model. This model will be developed by the participants during the Symposium. The interactive 3D model is navigable and capable of containing a variety of interactive multimedia components (writing, drawings, diagrams, photographic images, text, audio, video, etc.). These assets will be collected by a production team through an intimate process of documentation.
The body of the Symposium will be subdivided into three groups, each with a theme: ‘MEMORY’, ‘ACTION’, ‘DREAMING’. These groups will then define a series of questions that will be used to map the Architecture of Consciousness. The process of documentation will continue over the period and will allow participants (with the help of the production team) to edit and assemble elements of the model. These elements may include the ‘COORDINATES’ brought by the participants.
During the proceedings a production team will assemble the assets, mapping out the landscape. Towards the end of the Symposium the groups will recombine to view the assembled ‘map’ of the Architecture of Consciousness. The intention is that this ‘map’ will ultimately be used to produce a number of publications (paper-based, CD-ROM and WWW). These productions will be made available to the participants following the event.

FORMULATION: The formulation of questions during the initial briefing will be developed through the three groups:

MEMORY:

will explore the past, architecture remembered, half forgotten places, plans, recordings, annotations, documenta…

ACTION:

program and production, proposals, buildings (real and virtual), constructions, developments, activity, communication…

DREAMING:

the future…

The groups will have plenty of opportunity to cross reference and reform during the sessions and related activities.

DOCUMENTATION: A small production team will circulate through the three groups documenting the discussions and activities.

PRODUCTION: The production team will assemble the collected multimedia assets into the model.

 

 

 

COORDINATES:

In order to provide individual starting points for debate and production within the themed groups, each participant is requested to bring a digital ‘element’. A set of coordinates is given to provide an overall coherence to these contributions.

The coordinates are:

X AXIS: 640 pixels.
Y AXIS: 480 pixels.
Z AXIS: – ° to + ° (optional).
t AXIS: 120 seconds.

 

 

In other words: a standard 640 by 480 screen size (or PAL/NTSC video), an optional Z axis for 3D models (VRML, 3DMF, OpenGL), and 120 seconds long (these coordinates may be repeated).

These elements may be generated from any form (analogue and/or digital), such as diagrams, sketches, animations, video, film, text, audio, etc, but must be available in a digital form suitable for screening from a Macintosh G3 Power PC at 640×480 resolution, or on PAL or NTSC SVHS/VHS format.
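A trivial check of an element against these coordinates might look like the following Python sketch (how the width, height and duration are actually read will depend on the asset format):

def fits_coordinates(width, height, duration_s):
    # Interstices coordinates: 640 x 480 pixels, 120 seconds (repeatable).
    return (width, height) == (640, 480) and duration_s > 0 and duration_s % 120 == 0

print(fits_coordinates(640, 480, 120))   # True
print(fits_coordinates(640, 480, 240))   # True: the coordinates repeated once
print(fits_coordinates(800, 600, 120))   # False: wrong frame size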

Each element provided by the participants will be incorporated into a show reel on site by the production team. These elements will be screened on Monday 24 August during the late evening Symposium party in the grounds of Port Eliot House; the elements will also be incorporated into the productions generated by the Symposium.

THEMES:

In order to map out the Interstices of the Architecture of Consciousness, past, present and future, three themes have been identified as offering suitably rich starting points for discussion. Each theme will occupy a third of the participants for the majority of the Symposium. Each group will be subdivided into four groups of five for focused discussion and production.

 

 

DREAMING:

“…series of pictures or events in mind of sleeping person; day-dream, fantasy; ideal or aspiration; beautiful or ideal person or thing; experience dream; imagine as in a dream; think of as a possibility; ideal or imaginary land; imagine, invent…”

ACTION:

“…process of doing or performing, exertion of energy or influence (demand for action; action of acid on metal); thing done (useful actions); series of events in drama, sl. events going on around one (a slice of the action); mechanism of instrument; mode or style of movement of an animal or human…”

MEMORY:

“…faculty by which things are recalled to or kept in the mind; this is an individual (a good memory); recovery of one’s knowledge by mental effort; act of remembering; person or thing remembered; posthumous repute (of happy memory); length of time over which memory extends (within living memory); store for data in computer; in memory to keep alive the remembrance of…”

(OED)

 

INTERSTICES CD-ROM:

So close – so far away. The INTERSTICES CD-ROM was authored in Macromedia Director, but time and money ran out before it was distributable. Here you can see glimpses of the interface and content, along with the beta version, which can be downloaded here: INTERSTICES CD-ROM BETA. The .exe runs in Windows only. Lots of bugs remain, and QTVR (mainly in the Dreaming section) is no longer supported on any contemporary platform.

Video clips from the CD-ROM:

 

TIMETABLE:

Sunday 23 August:

1.00pm: Transport from the Consciousness Reframed Conference in Caerleon.

3.30pm: Arrive at the Robbins Conference Centre at the University of Plymouth.

6.00pm: Transport to St Germans.

6.20pm: Arrive at Port Eliot House.

7.00pm: Welcome and introduction to Port Eliot House by Lord St Germans. Introduction to Interstices by Professor Roy Ascott.

7.30pm: Symposium Meal. Following the meal participants will be invited to tour the House and the Port Eliot Estate.

10.30pm: Return to the Robbins Conference Centre at the University of Plymouth.

Monday 24 August:

9.30am: Transport to St Germans.

10.00am: ‘Mapping the Territory’, Symposium briefing and presentation of the ‘model’ and the themes. Symposium body divides into themed groups.

1.00pm: Lunch.

2.30pm: Themed groups divide into workshops (groups of five) for focused debate and production.

5.00pm: Tea Break.

6.00pm: Workshop.

8.00pm: Dinner followed by Symposium Party and screenings of individual ‘coordinates’.

Midnight: Return to the Robbins Conference Centre at the University of Plymouth.

Tuesday 25 August:

10.00am: Transport to St Germans.

10.30am: Workshops.

1.00pm: Lunch.

2.30pm: Mixed Workshops.

4.00pm: Reassemble themed groups to coordinate elements for inclusion in the ‘Model’.

5.00pm: Construction of ‘Model’ and final presentation.

6.00pm: Buffet meal.*

9.00pm: Return to the Robbins Conference Centre at the University of Plymouth.

FINI…

*Participants may wish to leave for Plymouth Train Station, last train to London 6.30pm.

Times are approximate and may change according to circumstances.

A minibus will be available to ferry Participants between Port Eliot and the Robbins Conference Centre at the University of Plymouth throughout the Symposium.

 

LOCATION:

LONG4'18''-LAT36'...057800M-236...A38-B3249...40-10-50-20-30...

Interstices will take place at Port Eliot House, St Germans in Cornwall, 20 minutes from the University of Plymouth accommodation at the Robbins Conference Centre. Transport will be provided for participants from the Consciousness Reframed Conference at Caerleon to the Robbins Conference Centre at the University of Plymouth.

Daily transport from the Robbins Centre to Port Eliot House will also be provided. A minibus will also be on hand to ferry small groups to and from the City. The short drive from the University to St Germans passes over the Tamar Suspension Bridge, running alongside the famous Brunel rail bridge, out into the Cornish countryside before entering the Port Eliot Estate.

The historic grounds of Port Eliot House provide a unique landscape for the Symposium, and Port Eliot House itself provides an incomparable environment for the debate and production to be generated over the two days. Port Eliot House has a unique history, stretching back to the first landings of the Phoenician tin traders in the first millennium B.C. Rumours of Joseph of Arimathea’s visits are embedded in the surrounding hills, and the remnants of past civilisations and conquests pervade the structure of the building.

Now Port Eliot House is in stasis. For millennia it has transformed and evolved. Internal walls, which were once the external (Norman) shell, mark the various evolutionary stages, like the growth rings of an oak. Now, with Grade I listed status, it can no longer evolve. It is as if it has reached its mid-point, equidistant between the past and the future. Looking back at history, Interstices, the Architecture of Consciousness, finds the perfect location to study the present and envision the future.

INTERSTICES PRODUCTION CREDITS:

PRE-PRODUCTION:

Pascal Auberson

Chris Brown

Neil Dare-Williams

James Norwood

Annabelle Ozanne

Steve Smith (Original Engine Design)

Darrel Wilkins

 

PRODUCTION:

Everyone at INTERSTICES

 

POST-PRODUCTION:

Jon Bedworth

James Berryman

Pete Everett

George Grinsted

Robert Hocking

Eliot Lewis

Dan Livingstone

Mike Phillips

Chris Speed

Mark Swift

With Special thanks to Peregrine Eliot, 10th Earl of St Germans (2 January 1941 – 15 July 2016),

Jago Eliot, Lord Eliot (24 March 1966 – 15 April 2006)

and the Port Eliot Estate for hosting the event.

 

 

 

PSALMS


Psalms is the Autonomous Wheelchair constructed by Guido Bugmann for Donald Rodney’s “Nine Night in Eldorado” at the South London Gallery, 1997.

“…Our fear of automata is again harnessed in Psalms, as the empty wheelchair courses through its various trajectories on a sad and lonely journey of life, a journey to nowhere. Its movements repeat like an ever recurring memory, a memory of another life and another journey, that of Donald Rodney’s father…”
(Exhibition brochure, Jane Bilton.)

Psalms was on show at Plymouth Council House for The Atlantic Project, 28 September – 21 October 2018.

[Images: Psalms installation photos]

Psalms in action at the opening of “Nine Night in Eldorado” at the South London Gallery, 1997.

Short article published in Mediaspace: http://www.mike-phillips.net/blog/wp-content/uploads/2014/01/mediaspace4.pdf
[Images: Mediaspace 4, pages 21 and 23]

The work has also been exhibited at:

‘The political significance of Rodney’s work should not be underestimated, nor his legacy which continues to inspire younger artists.’ (Keith Piper, Co-curator).

The web page by Dr Guido Bugmann tracks the various installations and technical details for the commission.
http://www.tech.plym.ac.uk/soc/staff/guidbugm/wheelchair/wheelc.htm
(pdf version here).
[Photo: Reboot 2016]
Reboot 2016.
The wheelchair uses 8 sonar sensors, shaft encoders, a video camera and a rate gyroscope to determine its position. A neural network using normalised RBF nodes encodes the 25 semi-circular sequences of positions forming the trajectory. The control system comprises a laptop PC 586 running a control program written in CORTEX-PRO, linked to a Rug Warrior board built around the 68HC11 microcontroller.
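As a rough modern illustration of the normalised-RBF position coding described above, here is a minimal NumPy sketch; the arc, width (sigma) and test position are invented for illustration and are not taken from the Psalms controller:

import numpy as np

def normalised_rbf(x, centres, sigma=30.0):
    # Normalised RBF activations: Gaussian responses divided by their sum,
    # giving a soft "winner" code over the stored positions.
    d2 = np.sum((centres - x) ** 2, axis=1)
    phi = np.exp(-d2 / (2 * sigma**2))
    return phi / phi.sum()

# 25 stored positions along a semi-circular arc (illustrative, in cm)
theta = np.linspace(0, np.pi, 25)
centres = np.column_stack((500 + 200 * np.cos(theta), 550 + 200 * np.sin(theta)))

position = np.array([430.0, 700.0])   # current (x, y) estimate
activation = normalised_rbf(position, centres)
print("most active node:", int(np.argmax(activation)))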
Original (with some tweaks) Psalms Code (courtesy of Guido Bugmann):

/*HS!!1*/// program WCSHOW8.CTX
 // version 14 Sept 97, 15 March 98, Feb 2003, Oct 2008 (INIVA gallery London)
 // version Nov 2009
 // version june - july 2012 Wellcome exhibition
 // wheelchair project 97
 // modif of wcshow2.ctx by adding correct
 // trajectories encoding from wcshow3.ctx
 //from wcshow6.ctx
/*SH1!1*///variables // new. Including room dimensions
 /*HS!!2*///----------comm variables--------------
 int Eureka =104, scope_data = 110, gyro_data = 105, gyro_reset = 125
 int reset_shaft_xyphi = 120
 int battery_data = 106
 int i
 int ok_byte = 33, ready_byte = 44, sending_byte = 55, command_send = 66
 int stop_byte = 77, stop_motors = 99, motor_command = 107
 int message_byte, b1, b2, b3
 int serial_count, serial_count_limit = 500
 // a smaller limit may be better
 float float_data
 int left_speed = 1, right_speed = 1
 float right_speed_desired = 0
 float left_speed_desired = 0
/*HH2!2*///----------position variables--------------
 float PI = 4*arctan(1)
float phi2, angle1, angle2, delta_phi, angle_difference
float x_wc=0, y_wc=0, phi_wc=0
 float x_wc_shift=0, y_wc_shift=0, phi_wc_shift=0
 float old_x_wc=0, old_y_wc=0, old_phi_wc=0
 float d_x_wc_dt, d_y_wc_dt, d_phi_wc_dt
 float x_wc_predict, y_wc_predict, phi_wc_predict
 float previous_phi_wc, previous_x_wc, previous_y_wc
float x_shaft, y_shaft, phi_shaft
 float w_x_shaft, w_y_shaft, w_phi_shaft, w0_shaft
 float d_x_shaft_dt, d_y_shaft_dt, d_phi_shaft_dt
 float old_x_shaft, old_y_shaft, old_phi_shaft
float phi_gyro, w0_gyro, w_phi_gyro, d_phi_gyro_dt
float phi_camera, w_phi_camera, d_phi_camera_dt
 float phi_cam1, phi_cam2
float x_sonar, y_sonar, phi_sonar
 float w0_sonar, w_x_sonar, w_y_sonar, w_phi_sonar
 float d_x_sonar_dt, d_y_sonar_dt, d_phi_sonar_dt
float last_time_shaft, last_time_motor
/*HH2!2*/// ---------wheelchair---------
 float delta_r = 56.5
 /*HH2!2*///----------R O O M Dimensions at Wellcome trust ------------
float max_room_xcm = 1000 //694
 float max_room_ycm = 1100 // 1005
 /*HH2!2*///----------trajectory---------------//new
 int section_list[32]
 /*HH2!2*///----------neural networks --------------- // new
float theta, theta_next, r
int steps_nb_max = 16
 int k, current_grid
 int section_size[40]
 /float section_boost= 150
 / old section based
 /float section_boost= 50
 / new node based for trajectory
float section_boost= 90 // test using 1. Should be 90
 / new node based for field drawing
/float section_boost= 0.3
 / very new node + sigma based for trajectory
 float forced_next_state
/*HH2!2*///----------speed variables----------// new
 float section_speed[200]
 int current_section, temp_left_speed, temp_right_speed
 /*HH2!2*///-----------sonar --------------
 int keyboard, color, sonar, alarm_persistence = 0
 float time_since_last_alarm = 0, time_of_last_alarm = 0
int min_range_check = 20, max_sonar_array = 1500
 int echo[max_sonar_array]
 float found_flag, xs, ys
 float sonar_dir[8+1], sonar_range[8+1], old_sonar_range[8+1], sonar_x[8+1], sonar_y[8+1]
 int front_obstacle, left_obstacle, right_obstacle
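 // Assumed geometry, not documented in the original listing: sonar_dir[i] is
 // presumably the beam direction of sensor i in radians in the chair's frame,
 // and sonar_x[i]/sonar_y[i] its mounting offset (probably in cm).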
sonar_dir[1]=-PI/2
 sonar_dir[2]=3*PI/4
 sonar_dir[3]=-PI
 sonar_dir[4]=PI/2
 sonar_dir[5]=PI/2
 sonar_dir[6]=PI/2
 sonar_dir[7]=0
 sonar_dir[8]=PI/4
 sonar_x[1] = 0
 sonar_y[1]= -10
 sonar_x[2] = -26
 sonar_y[2]= 7
 sonar_x[3] = -26
 sonar_y[3]= 37
 sonar_x[4] = -26
 sonar_y[4]= 51
 sonar_x[5] = 0
 sonar_y[5]= 51
 sonar_x[6] = 26
 sonar_y[6]= 51
 sonar_x[7] = 26
 sonar_y[7]= 37
 sonar_x[8] = 26
 sonar_y[8]= 7
float sonar_d1, sonar_d2, sonar_d3, sonar_d4
 /*HH2!2*/// -----pcmcia card--------
//int porta = 256, portb = 257
 int porta = 8192, portb = 8193
 //int porta = 992, portb = 993
 //int porta = 640, portb = 641
 / (hex 100 as defined in the dc24cl.com call in autoexec.bat)
 / (hex 3e0 = 992 dec as defined in the dc24cl.com call in autoexec.bat)
 // hex 280 = dec 640
 /*HH2!2*///---------video------------
 float contrast_gain = 5
 float av_act =0, max_act = 0, w0_camera
 float xsize = 0.97
 float ysize = 0.97
 float xcentre = 0.5
 float ycentre = 0.5
 int nodes_id, max_search = 50
 int Houghinit = 0
/*HH2!1*///--- set serial port mode----
 //ctx_system "mode com1:1200,n,8,1"
// ctx_system "mode com2:9600,n,8,1"
 // int port = 2
// ctx_system "mode com1:9600,n,8,1"
 ctx_system "mode com1:9600,n,8,1,p"
 int port = 1
 //pak
/*HH1!1*///====defines buttons =========
 defpan progpan
 button "return", "poppan"
 butgap
 //button "Send", "gosub Send"
 button "STOP", "gosub stop_motors"
 button "Motors_on", "gosub send_motor_command"
 butgap
 //button "Scope", "gosub get_scope_data"
 button "Sonar", "gosub sonar_scan8"
 /butgap
 /button "Video", "gosub get_camera_data"
 /button "QC init", "gosub Houghinitqc"
 butgap
 button "Reset_Gyro", "gosub reset_gyro"
 button "Gyro", "gosub get_gyro_data"
 butgap
 button "Reset Shaft", "gosub reset_shaft_position_to_zero"
 button "Shaft_pos", "gosub get_shaft_position_data"
 butgap
 button "Track Pos.", "gosub procedure_track_position"
 pushpan progpan
/*HH1!1*/// graphs and windows
float max_steps = 255
graph_default_gridtype 0
 graph_default_axislabel "Steps", "Data"
 graph_default_range 0, 0, max_steps, 255
 makegraph Data
graph_default_axislabel "Range", "Sensor"
 graph_default_range 0, 0, float(max_sonar_array), 10
 makegraph Sonar
/win_default_range 0,0,1,1
 /makewin Image
win_default_range -50,-50,50+max_room_xcm,50+max_room_ycm
 makewin Room
//win_plot 0, 0, RED
 //win_line 1000, 1100, YELLOW
 /*HH1!1*/// Build network and define trajectory sections to be used
 /gosub build
 / needed only if image is used
/gosub build_a_circle
 /*HS!!2*/// define section list // new - need to reflect in "build trajectory"
 for i=1 to 31
 section_list[i]=-99
 next i
 // actually used sections:
 section_list[1]=1
 section_list[2]=2
 section_list[3]=3
 section_list[4]=4
 section_list[5]=13
 section_list[6]=14
 section_list[7]=15
 section_list[8]=16
 // needs to be in ascending order for now.
gosub build_trajectories
 /*HH1!1*/int ON_BRICKS = 0 //if on bricks = 1 -> sonar-based obst. avoid disabled
 int SIMULATION = 0
 // ---> clear and run the program if the values above are changed
 /*SH1!1*/end
 /*SS!!1*/
 /*HS!!1*/procedure_track_position: // Initial position to set here
/*HS!!2*/restart_procedure:
w0_shaft = 1
 w0_gyro=1
 w0_camera = 0
 w0_sonar =1
alarm_persistence=0
float l_arrow = max_room_xcm/15
/-----------------------------------
 /*HH2!2*/// initialise the robot position:
 /-----------------------------------
 // here we decide where he starts and where he looks.
 // in the final program, he will find out by himself.
/gosub procedure_self_localisation
// x_wc = max_room_xcm/2
 x_wc = 700 // For Wellcome Trust
 y_wc = 0
 phi_wc = PI/2 //towards middle of room = north
 // phi_wc = 0 // towards right.
 // phi_wc = PI/2 - 0.35 //North, slightly to the East.
/ to start the sequence:
 //set_grid_outputs Sequence_State, 1*section_boost
 set_grid_outputs Sequence_State, 5*section_boost
old_x_wc = x_wc
 old_y_wc = y_wc
 old_phi_wc = phi_wc
old_x_shaft = x_wc
 old_y_shaft = y_wc
 old_phi_shaft = phi_wc
gosub reset_shaft_position
 gosub reset_gyro
old_x_shaft=x_wc
 old_y_shaft=y_wc
 old_phi_shaft=phi_wc
if SIMULATION=1
 float next_x = x_wc
 float next_y = y_wc
 float next_phi = phi_wc
 endif
 /-----------------------------------
 /*HH2!2*/// initialise the robot's speed and speed per section // new
 /-----------------------------------
 float base_speed = 20 // 15 //13 //what is the max?
 // 20 is right for Wellcome
 left_speed = 1
 right_speed = 1
 gosub send_motor_command
 // to preload the capacitor and prevent
 // a start backwards
 /test
 /base_speed = 10
 left_speed_desired = 0
 right_speed_desired = 0
 /----------------------------------------
 // start large circles-------------
 for k = 0 to 12
 section_speed[k]=0.8*base_speed //was 0.7
 next k
 for k = 13 to 24
 section_speed[k]=0.8*base_speed
 next k
 for k = 25 to 48
 section_speed[k]=base_speed
 next k
 // start inwards spiral------------
 for k = 49 to 60
 section_speed[k]=0.9*base_speed
 next k
 for k = 61 to 70
 section_speed[k]=0.8*base_speed
 next k
 for k = 71 to 78
 section_speed[k]=0.6*base_speed
 next k
 for k = 79 to 83
 section_speed[k]=0.5*base_speed
 next k
 for k = 84 to 87
 section_speed[k]=0.4*base_speed
 next k
 // start figures of eight------------
 for k = 88 to 94
 section_speed[k]=0.4*base_speed
 next k
 for k = 95 to 156
 section_speed[k]=0.5*base_speed
 next k
 for k = 157 to 162
 section_speed[k]=0.4*base_speed
 next k
 // start outward spiral-------------
 for k = 163 to 166
 section_speed[k]=0.4*base_speed
 next k
 for k = 167 to 175
 section_speed[k]=0.5*base_speed
 next k
 for k = 176 to 185
 section_speed[k]=0.7*base_speed
 next k
 for k = 186 to 199
 section_speed[k]=0.8*base_speed
 next k
 /---------------------------------------------------------------
 /*HH2!2*/// Main control Loop (get position data, plot, compute controls)
 /---------------------------------------------------------------
win_active Room
 win_clear
 print "PRESS ANY KEY TO STOP (but not ESC)."
 / so as not to stop in the middle of a communication process
win_move 0,0
 win_line max_room_xcm,0, 4
 win_line max_room_xcm, max_room_ycm, 4
 win_line 0, max_room_ycm, 4
 win_line 0, 0, 4
init_timer
 last_time_shaft = 0
 last_time_motor = 0
/*HH2!2*/while !kbhit()
keyboard = port_in(96)
/ /my_sound 5000,30
 if elapsed_time(1) > 100000
 init_timer
 last_time_shaft = 0
 last_time_motor = 0
 endif
/*HH2!2*/ //gosub get_shaft_position_data
 if SIMULATION=1
 x_shaft= x_wc+ 0.3*(next_x-x_wc)
 y_shaft = y_wc+0.3*(next_y-y_wc)
 phi_shaft =next_phi// +0.1*(next_phi-phi_wc)
 else
 gosub get_shaft_position_data
 endif
w_x_shaft = 0.5
 w_y_shaft = 0.5
 w_phi_shaft = 0.25
if y_shaft=0 & x_shaft = 0
 goto restart_procedure
 endif
 /win_line x_shaft, y_shaft, WHITE
 /*HH2!2*/ gosub predict_current_shaft_position
/ gosub get_gyro_data
 / w_phi_gyro = 0.25
 / print " Phi G Y R O = ", phi_gyro*180/PI
/*HH2!2*/ // gosub get_sonar_data
 if SIMULATION!=1
 gosub get_sonar_data
 endif
 /*HH2!2*/ gosub check_sonar_validity
 phi_sonar = phi_sonar + 0.45*d_phi_shaft_dt
 / the 0.45 sets the value to that at the time when the RugWarrior
 / is updated (the same 0.45 is used to bring back to the present the
 / angles in "check_sonar_validity")
 /*HH2!2*/// if necessary (robot stuck in a corner), relocalise
 //if alarm_persistence > 6
 // gosub relocalise
 //endif
/ gosub get_camera_data
 / w_phi_camera = 0
 x_wc = (w_x_shaft*x_shaft+w_x_sonar*x_sonar)/(w_x_shaft+w_x_sonar)
 y_wc = (w_y_shaft*y_shaft+w_y_sonar*y_sonar)/(w_y_shaft+w_y_sonar)
//print "====++++++ y_wc = ", y_wc, " y_shaft = ", y_shaft
 /*SH2!2*///print "====++++++ x_wc = ", x_wc, " x_shaft = ", x_shaft
 /*HS!!2*/// calculate weight-averaged position values
angle1=phi_shaft
 angle2=phi_sonar
 gosub angle_diff
/ if abs(angle_difference) > 0.3*PI
 / w_phi_sonar=0
 / endif
 phi_wc = (phi_shaft*(w_phi_shaft+w_phi_sonar) + w_phi_sonar * angle_difference)/(w_phi_shaft+w_phi_sonar)
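 / fusion: x_wc and y_wc are weighted averages of the shaft-encoder and
 / sonar estimates; for the angle, the sonar enters as the wrapped
 / difference from angle_diff, i.e.
 / phi_wc = phi_shaft + w_phi_sonar*(phi_sonar-phi_shaft)/(w_phi_shaft+w_phi_sonar),
 / which avoids averaging across the +/-PI seam.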
/----wc positions are estimated at time present---
 / later this will be a weighted average
 / also including time-delays and rates of changes d_bla_dt
 /*HH2!2*/// reset values in RugWarrior board
 if w_phi_sonar > 0 | w_x_sonar > 0 | w_y_sonar > 0
 angle1=0
 angle2=phi_wc
 gosub angle_diff
 phi_wc = angle_difference
 gosub reset_shaft_position
 /gosub reset_gyro
 / /my_sound 1000,30
 endif
 /*HH2!2*/ //----------position and sonar plot-----------
 win_fillellipse x_wc-0.1*l_arrow, y_wc-0.1*l_arrow, x_wc+0.1*l_arrow,y_wc+0.1*l_arrow, LBLUE
 win_move x_wc, y_wc
 win_line x_wc+l_arrow*cos(phi_wc), y_wc+l_arrow*sin(phi_wc), BLUE
//----------sonar plot---------------
 gosub sonar_plot
/------------------------------
 /*SH2!2*/// calculate a new motor command - using neural network
 /------------------------------
 / (simplified here)
 // left_speed = 15
 // right_speed = 15
 / gosub feedback_straight_line
gosub predict_position_of_control_action
 / gosub update_network
 / for the circle
 gosub update_network2
 gosub calculate_new_motor_command
 /*HS!!2*/// deal with obstacles (stop or avoid) or execute the motor command
temp_left_speed = left_speed
 temp_right_speed = right_speed
gosub sonar_safety // disabled for trial on bricks
/*HH2!2*/// -> avoidance manoeuvres (not if ON BRICKS)
 if ON_BRICKS = 1
 goto no_avoidance
 endif
 if alarm_counter <= 2 // >= 2
 int motion_duration = 4500 // 3200 // 2800 // 2400 for short avoidance moves
 /*SS!!3*/ // 1. case of obstacle in front -> turn in the previous direction
if front_obstacle =1 & left_obstacle = 0 & right_obstacle=0
if delta_phi < 0
 left_speed = 18 //10
 right_speed = 1
 else
 left_speed = 1
 right_speed = 18 //10
 endif
 gosub send_motor_command
 ms_delay motion_duration
print "STOP AFTER ESCAPE MANOEUVRE"
 left_speed = 0
 right_speed = 0
 gosub send_motor_command
 endif
/*HS!!3*/ // 2. left or right obstacle -> turn to escape // new 23.2.03
if left_obstacle = 1 & right_obstacle = 0
 left_speed = 18 //10
 right_speed = 1
 gosub send_motor_command
 ms_delay motion_duration //1500
 print "STOP AFTER ESCAPE MANOEUVRE"
 left_speed = 0
 right_speed = 0
 gosub send_motor_command
 endif
if right_obstacle = 1 & left_obstacle = 0
 left_speed = 1
 right_speed = 18 //10
 gosub send_motor_command
 ms_delay motion_duration //1500
 print "STOP AFTER ESCAPE MANOEUVRE"
 left_speed = 0
 right_speed = 0
 gosub send_motor_command
 endif
/*SH3!3*/ // 3. Obstacles on both sides -> do not move
 if right_obstacle = 1 & left_obstacle = 1
 left_speed = 1
 right_speed = 1
 gosub send_motor_command
 print "SQUEEZED ON BOTH SIDES"
 my_sound 3000, 150 // short high-pitched beep
 ms_delay 1500
 endif
/*HS!!3*/ // 4. If too close in front -> stop and beep (commented out)
/ if sonar_range[4]<0 & sonar_range[5]<0 & sonar_range[6]<0
 / my_sound 6000,100
 / endif
/*HH3!3*/ // 5. stop after the escape manoeuvre
//print "STOP AFTER ESCAPE MANOEUVRE" - now done inside each if above.
 //left_speed = 0
 //right_speed = 0
 //gosub send_motor_command
else
 /*HH2!2*/// or proceed with previous command
 left_speed = temp_left_speed
 right_speed = temp_right_speed
 // gosub send_motor_command //gb 23.10.08
endif
/*HH2!2*/no_avoidance:
gosub send_motor_command //gb 23.10.08
old_x_wc = x_wc
 old_y_wc = y_wc
 old_phi_wc = phi_wc
 // used also in predict and sonar plot...
 /pak
 /*SH2!2*/endwhile
//pak
 //int bla=getch()
gosub stop_motors
 /----------------------------------------------
 return
 /*SH1!1*/
 /*HS!!1*/predict_current_shaft_position: //delays adjustable
if SIMULATION=1
 return
 endif
float delay1 = 1.20
 float time_delay_shaft = elapsed_time(1)-last_time_shaft
 // 0.94= times in seconds to send back a value in phase with RugW.
 // previous 0.6, 0.5 was ok, 0.42, 0.695
 //print "---------- ", time_delay_shaft
 //NOTE: I assume that delay1 + delay2 (next subroutine) = total time delay
 // about 2.5 sec at Wellcome
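 // the d_*_dt values are displacements scaled to one delay1 interval;
 // the pose is extrapolated by that displacement along the mid-arc
 // heading, x' = x + ds*cos(phi + 0.5*d_phi), y' = y + ds*sin(phi + 0.5*d_phi),
 // a first-order correction for turning while moving.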
float local_x, local_y, local_phi
d_x_shaft_dt = delay1*(x_shaft - old_x_shaft)/time_delay_shaft
 d_y_shaft_dt = delay1*(y_shaft - old_y_shaft)/time_delay_shaft
angle1 = old_phi_shaft
 angle2 = phi_shaft
 gosub angle_diff
d_phi_shaft_dt = delay1*(angle_difference)/time_delay_shaft
/d_phi_shaft_dt = delay1*(phi_shaft - old_phi_shaft)/time_delay_shaft
float delta_s = sqrt(d_x_shaft_dt * d_x_shaft_dt + d_y_shaft_dt * d_y_shaft_dt)
 local_x = x_shaft + delta_s*cos(phi_shaft+0.5*d_phi_shaft_dt)
 local_y = y_shaft + delta_s*sin(phi_shaft+0.5*d_phi_shaft_dt)
 local_phi = phi_shaft + d_phi_shaft_dt
old_x_shaft=x_shaft
 old_y_shaft=y_shaft
 old_phi_shaft=phi_shaft
x_shaft = local_x
 y_shaft = local_y
 phi_shaft = local_phi
/win_move local_x, local_y
last_time_shaft = elapsed_time(1)
return
 /*HH1!1*/predict_position_of_control_action: //delays adjustable
if SIMULATION!=1
 float delay2 = 1.3// 1.75
 float time_delay_motor = elapsed_time(1)-last_time_motor
 // previous 1.5
 //print "----------------", time_delay_motor
 float local_x, local_y, local_phi
d_x_wc_dt = delay2*(x_wc - old_x_wc)/time_delay_motor
 d_y_wc_dt = delay2*(y_wc - old_y_wc)/time_delay_motor
 angle1 = old_phi_wc
 angle2 = phi_wc
 gosub angle_diff
 d_phi_wc_dt = delay2*(angle_difference)/time_delay_motor
/old_x_wc=x_wc
 /old_y_wc=y_wc
 /old_phi_wc=phi_wc
float delta_s = sqrt(d_x_wc_dt * d_x_wc_dt + d_y_wc_dt * d_y_wc_dt)
 local_x = x_wc + delta_s*cos(phi_wc+0.5*d_phi_wc_dt)
 local_y = y_wc + delta_s*sin(phi_wc+0.5*d_phi_wc_dt)
 local_phi = phi_wc + d_phi_wc_dt
x_wc_predict = local_x
 y_wc_predict = local_y
 phi_wc_predict = local_phi
/win_move local_x, local_y
/printf "Prediction %f, %f, %f\n", x_wc_predict, y_wc_predict, phi_wc_predict*180/PI
last_time_motor = elapsed_time(1)
 else // in simulation mode
 x_wc_predict = x_wc
 y_wc_predict = y_wc
 phi_wc_predict = phi_wc
endif
 return
 /*HH1!1*/sonar_safety:
if ON_BRICKS = 1
 return
 endif
 // This routine stops the wheelchair until the obstacle has moved.
 // if after 1200 ms the obstacle is still there, the control is
 // returned to the main loop which will execute an escape movement.
int shaft_encoder_hold = 0
 int alarm_counter=0
/*HS!!2*/detect_an_obstacle:
 front_obstacle=0
 left_obstacle=0
 right_obstacle=0
//for sonar = 4 to 6
 // gosub fast_get_sonar_range
 // acquire fresh data in front
 // if sonar_range[sonar]<150 & sonar_range[sonar] > 0
 // front_obstacle = 1
 //print "FRONT = ", sonar, " range = ", sonar_range[sonar]
 //pak
 /goto sonar_stop
 // endif
 //next sonar
 // idea: do not fire all front sonars one after the other, to avoid stray echoes?
 sonar =4
 gosub fast_get_sonar_range
 // acquire fresh data in front
 if sonar_range[sonar]<150 & sonar_range[sonar] > 0
 front_obstacle = 1
 endif
sonar = 2
 gosub fast_get_sonar_range
 sonar = 3
 gosub fast_get_sonar_range
 sonar =5
 gosub fast_get_sonar_range
 // acquire fresh data in front
 if sonar_range[sonar]<150 & sonar_range[sonar] > 0
 front_obstacle = 1
 endif
sonar = 7
 gosub fast_get_sonar_range
 sonar = 8
 gosub fast_get_sonar_range
 sonar =6
 gosub fast_get_sonar_range
 // acquire fresh data in front
 if sonar_range[sonar]<150 & sonar_range[sonar] > 0
 front_obstacle = 1
 endif
if sonar_range[2]<80 & sonar_range[2] > 0
 left_obstacle=1
 //print " sonar 2: ", sonar_range[2], " sonar 3: ", sonar_range[3]
 /goto sonar_stop
 endif
 if sonar_range[8]<80 & sonar_range[8] > 0
 right_obstacle=1
 /goto sonar_stop
 endif
 if sonar_range[3]<60 & sonar_range[3] > 0
 left_obstacle=1
 /goto sonar_stop
 endif
 if sonar_range[7]<60 & sonar_range[7] > 0
 right_obstacle=1
 /goto sonar_stop
 endif
if front_obstacle = 1 | left_obstacle=1 | right_obstacle=1
 goto sonar_stop
 endif
if shaft_encoder_hold = 1
 gosub reset_shaft_position
 //gosub reset_gyro
 endif
 //print "NO OBSTACLES : SAFE"
 return
 /--------------------------------
 /*HH2!2*/sonar_stop:
 alarm_counter +=1
left_speed = 0
 right_speed = 0
 gosub send_motor_command
 //my_sound 9000,100
 print "SONAR STOP ", "obstacles L F R=", left_obstacle, " ", front_obstacle, " ", right_obstacle
 ms_delay 1200
if shaft_encoder_hold = 0
 old_x_wc = x_wc
 old_y_wc = y_wc
 old_phi_wc = phi_wc
 shaft_encoder_hold = 1
 endif
if alarm_counter >= 2
if shaft_encoder_hold = 1
 gosub reset_shaft_position
 gosub reset_gyro
 endif
time_since_last_alarm = elapsed_time(1)-time_of_last_alarm
if time_since_last_alarm < 20
 alarm_persistence +=1
 else
 alarm_persistence =0
 endif
time_of_last_alarm = elapsed_time(1)
return
 endif
goto detect_an_obstacle
return
 /*HH1!1*/check_sonar_validity:
float tested_phi = phi_shaft - 0.45*d_phi_shaft_dt
 / at that point in the program, we still use the previous value of phi_wc;
 / the tested value is therefore shifted back by 0.45*d_phi_shaft_dt to
 / match the time of the sonar measurement.
 float angle_tolerance = 12*PI/180
 // this tolerance should be dependent on the angular velocity.
 // when it is fast, there are less measurement points and
 // the tolerance should increase.
 float xy_tolerance = 40
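 // matching scheme: for each of the four cardinal headings, each
 // corrected sonar distance (sonar_d1..d4) is compared with the wall
 // distance expected from the current estimate; every match adds 0.25
 // to the corresponding weight, so up to four walls can confirm the
 // fix, and the sums are rescaled to averages at the end.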
w_x_sonar = 0
 w_y_sonar = 0
 w_phi_sonar = 0
x_sonar = 0
 y_sonar = 0
 phi_sonar = 0
found_flag =0
angle1=0
 angle2=tested_phi
 gosub angle_diff
/*SS!!2*/if abs(angle_difference) < angle_tolerance
 // case phi=0; look at the curtain: sonar_d2
 if abs(sonar_d2 - max_room_ycm + y_wc) < xy_tolerance & sonar_range[3]>0
 y_sonar += max_room_ycm - sonar_d2
 phi_sonar = 0
 w_y_sonar +=0.25
 w_phi_sonar += 0.25
 found_flag +=0.25
 /my_sound 2000,100
 endif
if abs(sonar_d1 - x_wc) < xy_tolerance & sonar_range[1]>0
x_sonar += sonar_d1
 phi_sonar = 0
 w_x_sonar +=0.25
 w_phi_sonar += 0.25
 found_flag +=0.25
 /my_sound 2000,100
 endif
if abs(sonar_d3 - max_room_xcm + x_wc) < xy_tolerance & sonar_range[5]>0
x_sonar += max_room_xcm - sonar_d3
 phi_sonar = 0
 w_x_sonar +=0.25
 w_phi_sonar += 0.25
 found_flag +=0.25
 /my_sound 2000,100
 endif
endif
/*HS!!2*/if abs(angle_difference-0.5*PI) < angle_tolerance
 // case phi=PI/2; look at the nearest side wall or both d2 left, d4 right
if abs(sonar_d4 - max_room_xcm + x_wc) < xy_tolerance & sonar_range[7]>0
 x_sonar += max_room_xcm - sonar_d4
 phi_sonar = 0.5*PI
 w_x_sonar += 0.25
 w_phi_sonar += 0.25
 found_flag +=0.25
 /my_sound 3000,100
 endif
if abs(sonar_d2 - x_wc) < xy_tolerance & sonar_range[3]>0
 x_sonar += sonar_d2
 phi_sonar = 0.5*PI
 w_x_sonar += 0.25
 w_phi_sonar += 0.25
 found_flag +=0.25
 /my_sound 3000,100
 endif
if abs(sonar_d1 - y_wc) < xy_tolerance & sonar_range[1]>0
 y_sonar += sonar_d1
 phi_sonar = 0.5*PI
 w_y_sonar += 0.25
 w_phi_sonar += 0.25
 found_flag +=0.25
 /my_sound 3000,100
 endif
if abs(sonar_d3 - max_room_ycm + y_wc) < xy_tolerance & sonar_range[5]>0
 y_sonar += (max_room_ycm - sonar_d3)
 phi_sonar = 0.5*PI
 w_y_sonar += 0.25
 w_phi_sonar += 0.25
 found_flag +=0.25
 /my_sound 3000,100
 endif
endif
/*HH2!2*/if abs(abs(angle_difference)-PI) < angle_tolerance
 // case phi=PI; look at the curtain d4
 if abs(sonar_d4 - max_room_ycm + y_wc) < xy_tolerance & sonar_range[7]>0
 y_sonar += max_room_ycm - sonar_d4
 phi_sonar = PI
 w_y_sonar += 0.25
 w_phi_sonar += 0.25
 found_flag +=0.25
 /my_sound 5000,100
 endif
if abs(sonar_d2 - y_wc) < xy_tolerance & sonar_range[3]>0
 y_sonar += sonar_d2
 phi_sonar = PI
 w_y_sonar += 0.25
 w_phi_sonar += 0.25
 found_flag +=0.25
 /my_sound 5000,100
 endif
if abs(sonar_d3 - x_wc) < xy_tolerance & sonar_range[5]>0
 x_sonar += sonar_d3
 phi_sonar = PI
 w_x_sonar += 0.25
 w_phi_sonar += 0.25
 found_flag +=0.25
 /my_sound 5000,100
 endif
if abs(sonar_d1 - max_room_xcm + x_wc) < xy_tolerance & sonar_range[1]>0
 x_sonar += max_room_xcm - sonar_d1
 phi_sonar = PI
 w_x_sonar += 0.25
 w_phi_sonar += 0.25
 found_flag +=0.25
 /my_sound 5000,100
 endif
endif
 /*HH2!2*/if abs(angle_difference+0.5*PI) < angle_tolerance
// case phi = -PI/2; look at side walls d4 (small x) and d2 large x
 if abs(sonar_d4 - x_wc) < xy_tolerance & sonar_range[7]>0
 x_sonar += sonar_d4
 phi_sonar = -0.5*PI
 w_x_sonar += 0.25
 w_phi_sonar += 0.25
 found_flag +=0.25
 /my_sound 4000,100
 /print " Phi_wc= ", phi_wc, " Phi_sonar = ", phi_sonar
 /pak
 endif
 if abs(sonar_d2 - max_room_xcm + x_wc) < xy_tolerance & sonar_range[3]>0
 x_sonar += max_room_xcm - sonar_d2
 phi_sonar = -0.5*PI
 w_x_sonar += 0.25
 w_phi_sonar += 0.25
 found_flag +=0.25
 /my_sound 4000,100
 endif
if abs(sonar_d1 - max_room_ycm + y_wc) < xy_tolerance & sonar_range[1]>0
 y_sonar += max_room_ycm - sonar_d1
 phi_sonar = -0.5*PI
 w_y_sonar += 0.25
 w_phi_sonar += 0.25
 found_flag +=0.25
 /my_sound 4000,100
 endif
if abs(sonar_d3 - y_wc) < xy_tolerance & sonar_range[5]>0
 y_sonar += sonar_d3
 phi_sonar = -0.5*PI
 w_y_sonar += 0.25
 w_phi_sonar += 0.25
 found_flag +=0.25
 /my_sound 4000,100
 endif
endif
/*HH2!2*/// average values
 if w_x_sonar > 0
 x_sonar = 0.25*x_sonar/w_x_sonar
 endif
if w_y_sonar > 0
 y_sonar = 0.25*y_sonar/w_y_sonar
 endif
w_x_sonar = 0.8*w_x_sonar
 w_y_sonar = 0.8*w_y_sonar
 w_phi_sonar = 0.8*w_phi_sonar
 /print "ywc = ", y_wc, " ysonar= ",y_sonar
return
 /*HH1!1*/relocalise:
print "RELOCALISE"
 alarm_persistence=0
/*HS!!2*/if sonar_d2 < 120 & sonar_range[3]>0
 // left turn desired
 angle2 = phi_wc
 angle1= 0
 gosub angle_diff
if abs(angle_difference) < PI/5
 y_shaft = max_room_ycm - 100
 y_sonar = y_shaft
 w_y_sonar = 1
 w_y_shaft = 1
 return
 endif
if abs(angle_difference-0.5*PI) < PI/5
 x_shaft = 100
 x_sonar = x_shaft
 w_x_sonar = 1
 w_x_shaft = 1
 return
 endif
if abs(angle_difference+0.5*PI) < PI/5
 x_shaft = max_room_xcm - 100
 x_sonar = x_shaft
 w_x_sonar = 1
 w_x_shaft = 1
 return
 endif
if abs(abs(angle_difference)-PI) < PI/5
 y_shaft = 100
 y_sonar = y_shaft
 w_y_sonar = 1
 w_y_shaft = 1
 return
 endif
return
endif
 /*HH2!2*/if sonar_d4 < 120 & sonar_range[7]>0
 // right turn desired
 angle2 = phi_wc
 angle1= 0
 gosub angle_diff
if abs(abs(angle_difference)-PI) < PI/5
 y_shaft = max_room_ycm - 100
 y_sonar = y_shaft
 w_y_sonar = 1
 w_y_shaft = 1
 return
 endif
if abs(angle_difference-0.5*PI) < PI/5
 x_shaft = max_room_xcm - 100
 x_sonar = x_shaft
 w_x_sonar = 1
 w_x_shaft = 1
 return
 endif
if abs(angle_difference+0.5*PI) < PI/5
 x_shaft = 100
 x_sonar = x_shaft
 w_x_sonar = 1
 w_x_shaft = 1
 return
 endif
if abs(angle_difference) < PI/5
 y_shaft = 100
 y_sonar = y_shaft
 w_y_sonar = 1
 w_y_shaft = 1
 return
 endif
return
endif
/*HH2!2*/if sonar_d3 < 120 & sonar_range[5]>0
 // heading towards a wall
 angle2 = phi_wc
 angle1= 0
 gosub angle_diff
if abs(angle_difference-0.5*PI) < PI/5
 y_shaft = max_room_ycm - 100
 y_sonar = y_shaft
 w_y_sonar = 1
 w_y_shaft = 1
 return
 endif
if abs(angle_difference) < PI/5
 x_shaft = max_room_xcm - 100
 x_sonar = x_shaft
 w_x_sonar = 1
 w_x_shaft = 1
 return
 endif
if abs(abs(angle_difference)-PI) < PI/5
 x_shaft = 100
 x_sonar = x_shaft
 w_x_sonar = 1
 w_x_shaft = 1
 return
 endif
if abs(angle_difference+0.5*PI) < PI/5
 y_shaft = 100
 y_sonar = y_shaft
 w_y_sonar = 1
 w_y_shaft = 1
 return
 endif
return
endif
return
 /*SH1!1*/
 /*SS!!1*/calculate_new_motor_command:
float next_x = node_output(first_node(Next_Position))
 float next_y = node_output(first_node(Next_Position)+1)
 float next_phi = node_output(first_node(Next_Position)+2)
win_fillellipse next_x-0.1*l_arrow, next_y-0.1*l_arrow, next_x+0.1*l_arrow, next_y+0.1*l_arrow, RED
 win_move next_x, next_y
 win_line next_x+l_arrow*cos(next_phi), next_y+l_arrow*sin(next_phi), YELLOW
/*HS!!2*/// next point as target. phi2 = angle to target
float dx = next_x - x_wc
 float dy = next_y - y_wc
 float dist = sqrt(dx*dx+dy*dy)
phi2 = 999
 if dx=0 & dy >0
 phi2 = PI/2
 endif
 if dx=0 & dy <0
 phi2 = -PI/2
 endif
 if dx != 0
 // guard: the arctan must not overwrite the dx=0 cases above
 // (and must not divide by zero)
 phi2 = arctan(dy/dx)
 if dy<0 & dx<0
 phi2 = phi2 + PI
 endif
 if dy>=0 & dx<0
 phi2 = phi2 + PI
 endif
 if dy<0 & dx>0
 phi2 =phi2 + 2*PI
 // to avoid all negative angles
 endif
 endif
 // phi2 stays 999 only when dx=0 & dy=0 (target reached);
 // the speed is then forced to 0 further below
 // phi2 is where the robot should point to reach the target
/*HH2!2*/// ---- difference between current robot heading phi_wc and desired one phi2
/ try use the predicted angle when the action starts:
 / not a good idea, there are big jumps in prediction
 / when sonar data are recorded
 //
 //float phip = phi_wc_predict
 // instead of phi_wc
float sind=(cos(phi2)*sin(phi_wc)-sin(phi2)*cos(phi_wc))
 float cosd = (cos(phi2)*cos(phi_wc)+sin(phi2)*sin(phi_wc))
 float local_tan = sind/cosd
 float delta_phi = arctan(local_tan)
if sind >=0 & cosd <0
 delta_phi += PI
 endif
 if sind < 0 & cosd < 0
 delta_phi -= PI
 endif
 delta_phi =-delta_phi
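 / same sin/cos construction as in angle_diff: delta_phi is the heading
 / error (phi2 - phi_wc) wrapped into (-PI, PI].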
/*SH2!2*//----speed control ------- new - Wellcome trust
base_speed = section_speed[current_section]
 float dist0=1 //m
 float w1 = 1/(1+exp(-(0.01*dist-dist0))) //dist in m, for control by angle to target
 float w2 = 1/(1+exp(+(0.01*dist-dist0))) //dist in m, for control by heading diff (delta_phi)
 // distance effect
 //print "w1= ", w1, " w2= ", w2
 angle1 = phi_wc
 angle2 = next_phi
 gosub angle_diff
 //float exp2 =exp(-0.5*(angle_difference)*(angle_difference))
 // direction effect
float speed = 0.5*base_speed*(1+exp(-abs(delta_phi)))
 // all about adjusting the speed to angle differences
 if phi2 = 999
 speed = 0
 endif
 //print "base_speed= ", base_speed, " speed = ", speed
//float exp3 =exp(-0.5*delta_phi*delta_phi)
 float sp_gain = 1.1 //1 allowed too much excursion // 0.82+0.9*(exp3)
 float vl1 = 2*speed/(1+exp( + sp_gain*delta_phi) )
 float vr1 = 2*speed/(1+exp( - sp_gain*delta_phi) )
 float vl2 = 2*speed/(1+exp( + sp_gain*angle_difference) )
 float vr2 = 2*speed/(1+exp( - sp_gain*angle_difference) )
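 // note vl + vr = 2*speed for any error, so the mean forward speed is
 // held constant while the left/right split steers; w1 weighs steering
 // towards the target point (dominant when far), w2 steering towards
 // the target heading (dominant when near).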
 print "vl1= ", vl1, " vr1= ",vr1
 print "vl2= ", vl2, " vr2= ",vr2
 left_speed_desired = fmax(1, w1*vl1 + w2*vl2)
 right_speed_desired = fmax(1, w1*vr1 + w2*vr2)
/float sp_gain = 0.5+0.3*speed/15
 /float sp_mom = 0.2
 /left_speed_desired = sp_mom*left_speed_desired + (1-sp_mom)*speed/(1+exp( + sp_gain*(delta_phi)) )
 /right_speed_desired = sp_mom*right_speed_desired + (1-sp_mom)*speed/(1+exp( - sp_gain*(delta_phi)) )
 /float lambda = 0.4
 //old good values
 //----------
 ///float speed = 0.7*base_speed + 0.3*base_speed*(exp1*exp2*exp3)
 ///float sp_gain = 0.82+0.9*(exp3)
 float sp_mom = 0.2 // 0.22+(0.8/speed)
 left_speed_desired = (1-sp_mom)*left_speed_desired + sp_mom*float(left_speed)
 right_speed_desired = (1-sp_mom)*right_speed_desired + sp_mom*float(right_speed)
 //float lambda = 0.54*sp_gain
 // seemed promising during the night
 //float lambda = 0.5 //gbtest 090712
 //-----------
 left_speed = int(left_speed_desired)
 right_speed = int(right_speed_desired)
 print "Ldes = ", left_speed_desired , " Rdes. = ", right_speed_desired, " phi_wc = ", 180*phi_wc/PI
 ///left_speed = int(left_speed_desired + lambda*(left_speed_desired -float(left_speed)))
 ///right_speed = int(right_speed_desired + lambda*(right_speed_desired -float(right_speed)))
/*SS!!2*/return
 /*HS!!1*/send_motor_command: //motor bias in here.
//int direction_byte = 0
 //if right_speed_desired >= 0
 // direction_byte +=1
 //endif
 //if left_speed_desired >= 0
 // direction_byte +=2
 //endif
 // To include when backwards motion is
 // considered in the RugWarrior board
message_byte = 107
 gosub handshake_send_byte
// speeds must not exceed +-100
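 / wire format: each speed is sent as one byte with an offset of 128,
 / so the usable -100..+100 range maps to 28..228; the scale factors
 / (0.80 left, 1.15 right) trim the left/right motor asymmetry.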
message_byte = int(0.80*float(left_speed)) + 128
 //0.8 for Wellcome, was 0.85
 //message_byte = int(1.1*float(left_speed)) + 128
 // the coef 1.1 compensates for an asymmetry between left and right
 // in the Penny and Gilles system.
 // seems to be the other way now (Nov 2009): right wheel is sluggish.
 gosub handshake_send_byte
//message_byte = right_speed + 128
 message_byte = int(1.15*float(right_speed)) + 128
gosub handshake_send_byte
//message_byte = direction_byte
 //gosub handshake_send_byte
 // To include when backwards motion is
 // considered in the RugWarrior board
 print "sent L=",left_speed, " R=", right_speed
 return
 /*HH1!1*/stop_motors:
message_byte = 99
 gosub handshake_send_byte
return
 /*SH1!1*/
 /*HS!!1*/get_gyro_data:
message_byte = 105
 gosub handshake_send_byte
gosub get_float_data
 phi_gyro = 1.26*PI*float_data/180
/ print "phi_gyro = ", 180*phi_gyro/PI
return
/*HH1!1*/reset_gyro:
 / gb 23.02.03 : gyro not used.
 / message_byte = 125
 / gosub handshake_send_byte
 /
 / float_data = 180*phi_wc/PI
 / gosub send_float_data
return
/*HH1!1*/get_battery_data:
 int b3,b2,b1
 message_byte = 106
 gosub handshake_send_byte
gosub get_float_data
return
/*HH1!1*/get_shaft_position_data:
 int b3,b2,b1
message_byte = 100
 gosub handshake_send_byte
// debug
 //gosub get_float_data
 //float rr = float_data
//gosub get_float_data
 //float ll = float_data
 // end debug
gosub get_float_data
 x_shaft = float_data
 /pak
 gosub get_float_data
 y_shaft = float_data
 /pak
 gosub get_float_data
 phi_shaft = PI*float_data/180
/print "x_shaft= ", x_shaft, ", y_shaft= ", y_shaft, ", phi_shaft= ", 180*phi_shaft/PI
return
 /*HH1!1*/reset_shaft_position:
message_byte = 120
 gosub handshake_send_byte
float_data = x_wc
 gosub send_float_data
float_data = y_wc
 gosub send_float_data
float_data = 180*phi_wc/PI
 gosub send_float_data
return
/*HH1!1*/reset_shaft_position_to_zero:
 x_wc = 0
 y_wc = 0
 gosub reset_shaft_position
 return
 /*SH1!1*/
 /*HS!!1*/float_to_three_bytes:
b3=0
 int Rest, Counter_base = 128
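 // encoding: b3 counts whole multiples of 32000 (plus 128 as a sign
 // flag), and b2/b1 hold the remainder as high/low bytes, so
 // value = (+/-)(32000*(b3 mod 128) + 256*b2 + b1);
 // get_float_data below inverts exactly this.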
while float_data > 32000.0
 float_data -= 32000.0
 b3 += 1
 endwhile
while float_data < -32000.0
 float_data += 32000.0
 b3 += 1
 endwhile
Rest = int(float_data)
 if float_data < 0.0
 Rest = -int(float_data)
 b3 = b3 + Counter_base
 endif
b1 = land(Rest,255)
 b2 = Rest/256
// print " ---- b3= ", b3, " b2= ",b2, " b1= ", b1
return
/*HH1!1*/get_float_data:
gosub handshake_read_byte
 b3=in_byte
 gosub handshake_read_byte
 b2=in_byte
 gosub handshake_read_byte
 b1=in_byte
if b3 < 128
 float_data = 32000 * float(b3) + 256 * float(b2) + float(b1)
 endif
 if b3 >= 128
 float_data = - 32000 * float(b3-128) - 256 * float(b2) - float(b1)
 endif
//print " ---- b3= ", b3, " b2= ",b2, " b1= ", b1, " --> Float_data = ", float_data
return
/*HH1!1*/send_float_data:
gosub float_to_three_bytes
message_byte = b3
 gosub handshake_send_byte
message_byte = b2
 gosub handshake_send_byte
message_byte = b1
 gosub handshake_send_byte
return
/*HH1!1*/handshake_send_byte:
int in_byte, serial_count=0
 / print " "
in_byte = serial_byte_in(port)
 // just to empty the buffer of a possible ok_byte waiting
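 / stop-and-wait handshake: the byte is resent until the RugWarrior
 / answers with ok_byte (33), or serial_count_limit attempts pass.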
send_byte_loop:
 serial_count +=1
 serial_byte_out port, message_byte
 in_byte = serial_byte_in(port)
 / printf " %d,", in_byte
 if (in_byte != ok_byte) & (serial_count < serial_count_limit)
 goto send_byte_loop
 endif
if serial_count >= serial_count_limit
 print " "
 print "No response from RugWarrior while sending."
 print "Please press ESC, and reset the RugWarrior board."
 beep
 pak
 endif
 return
/*HH1!1*/handshake_read_byte:
int in_byte, in_byte1,serial_count=0
 / print " "
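 / a byte is accepted only when two consecutive reads agree and differ
 / from 300 (evidently the empty-buffer value returned by
 / serial_byte_in); the read is then acknowledged with ok_byte.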
read_loop:
 serial_count += 1
 in_byte = serial_byte_in(port)
 in_byte1 = serial_byte_in(port)
 / printf " %d-%d,", in_byte, in_byte1
 if ((in_byte = 300) & (serial_count < serial_count_limit)) | (in_byte1 != in_byte )
 goto read_loop
 endif
 / printf " ->%d,", in_byte
if serial_count >= serial_count_limit
 print " "
 print "No response from RugWarrior while reading."
 print "Please reset it and check cables or power supply."
 beep
 pak
 endif
serial_byte_out port, ok_byte
 return
/*SH1!1*/
 /*HS!!1*/build:
 //dg 20,15,STANDARD_NODE_TYPE
 dg 40,30,STANDARD_NODE_TYPE
 //dg 54,40,STANDARD_NODE_TYPE
 mg contrast
 grid_nodestyle contrast, GREYSC_DISP_STYLE
 drawgrid contrast
mg Hough
 grid_nodestyle Hough, GREYSC_DISP_STYLE
 drawgrid Hough
dg 8,3, STANDARD_NODE_TYPE
 mg Average
 grid_nodestyle Average, GREYSC_DISP_STYLE
 drawgrid Average
 return
 /*HH1!1*/build_trajectories: // new incl circle diameter + number of elements
 float x1,y1,phi1, x1next, y1next, phi1next
 float theta, theta_next, r
 int nodes_id
/*HS!!2*/// neural net sizes definition
 / two circles
 section_size[1]=12
 section_size[2]=12
 section_size[3]=12
 section_size[4]=12
/inward spiral
 section_size[5]=12
 section_size[6]=10
 section_size[7]=8
 section_size[8]=5
 section_size[9]=5
/ four eights
 /section_size[10]=6
 /section_size[11]=6
 /section_size[12]=6
 /section_size[9]=6
section_size[10]=6
 section_size[11]=6
 section_size[12]=6
 section_size[13]=6
section_size[14]=6
 section_size[15]=6
 section_size[16]=6
 section_size[17]=6
section_size[18]=6
 section_size[19]=6
 section_size[20]=6
 section_size[21]=6
/ outward spiral
 /section_size[26]=3
 /section_size[27]=4
 section_size[22]=6
 section_size[23]=8
 section_size[24]=10
 section_size[25]=12
int net_size = 0
 //for i = 1 to 31
 //for i = 1 to 4 //31 : 1 -> 4 = only encode the two outer circles. Amsterdam
 / net_size += section_size[i]
 //next i
 // WELLCOME TRAJECTORY
 for i=1 to 31 // Wellcome
 if section_list[i]!=-99
 net_size+=section_size[section_list[i]]
 endif
 next i
 /*HH2!2*// nn build
dg 1,2, INPUT_NODE_TYPE
 mg Position
 grid_nodestyle Position, GREYSC_DISP_STYLE
 drawgrid Position
dg 12, int(float(net_size)/12)+1, GRBF_NODE_TYPE
 mg Trajectory
 grid_nodestyle Trajectory, GREYSC_DISP_STYLE
 drawgrid Trajectory
dg 1, 1, LINEAR_NODE_TYPE
 mg Sequence_State
 grid_nodestyle Sequence_State, GREYSC_DISP_STYLE
 drawgrid Sequence_State
/dg 1, 1, LINEAR_NODE_TYPE
 mg Special2
 grid_nodestyle Special2, GREYSC_DISP_STYLE
 drawgrid Special2
dg 1,4, LINEAR_NODE_TYPE
 mg Next_Position
 grid_nodestyle Next_Position, GREYSC_DISP_STYLE
 drawgrid Next_Position
connecttype ALL_TO_ALL
 connect Position, Trajectory
 connect Trajectory, Next_Position
 connect Trajectory, Sequence_State
set_weights 0
 set_grid_inweights Trajectory, 999999
 int section = 1
 int last_node = first_node(Trajectory)-1
 for nodes_id = last_node+1 to last_node+section_size[1]
 congridnode Sequence_State, nodes_id
 / set_weightsbetnodes first_node(Sequence_State), nodes_id, section_boost*(float(section)-0.5)
 next nodes_id
 last_node = last_node+section_size[1]
section=2
 for nodes_id = last_node+1 to last_node + section_size[2]
 congridnode Special2, nodes_id
 /set_weightsbetnodes first_node(Special2), nodes_id, section_boost*(float(section)-0.5)
 / old section based
 set_weightsbetnodes first_node(Special2), nodes_id, section_boost*(float(nodes_id - first_node(Trajectory) + 1)-0.5)
 / new node based
 next nodes_id
 last_node= last_node + section_size[2]
connect Sequence_State, Special2
 set_weightsbetgrids Sequence_State, Special2, 1
for nodes_id = last_node+1 to first_node(Trajectory)+net_size -1
 congridnode Sequence_State, nodes_id
 next nodes_id
uw
/*SH2!2*/// trajectory dimensions + margins
//r=350-margin of error due to low gain
 /float margin = 100
 / setting for gallery
float margin = 50
 / setting for Amsterdam 09 and Wellcome 2012
/float margin = 30
 / setting for drawing
float r0 = 300 // 200; a circle with a diameter of 6 m, for Wellcome Trust
 float dist_from_back_wall = 150 //not used?
 float xc= 0.5*max_room_xcm //center of circle
 //float yc= max_room_ycm - dist_from_back_wall - r - margin
 float yc=0.5*max_room_ycm+50 //center of circle
 //float yc=max_room_ycm - xc //iniva
 //float yc=410 //Amsterdam
 /*HS!!2*//first 1/2 circle, section 1
 last_node = first_node(Trajectory)-1
 section = 1
 float start_angle = PI
 float rotation_dir = 1
 // 1 = anticlockwise
 // opposite (clockwise) = -1
 r=r0-margin-10
 xc=0.5*max_room_xcm
 /*SH2!2*/gosub encode_half_circle
 /*HS!!2*//second 1/2 circle, section 2
 section = 2
 start_angle = 0
 // opposite (clockwise) = -1
 r=r0-margin-10
 xc=0.5*max_room_xcm
 /*SH2!2*/gosub encode_half_circle
 /*HS!!2*//third 1/2 circle, section 3
 section = 3
 start_angle = PI
 // opposite (clockwise) = -1
 r=r0-margin-10
 xc=0.5*max_room_xcm
 /*SH2!2*/gosub encode_half_circle
 /*HS!!2*//fourth 1/2 circle, section 4
 section = 4
 start_angle = 0
 // opposite (clockwise) = -1
 r=r0-margin-10
 xc=0.5*max_room_xcm
 /*SH2!2*/gosub encode_half_circle
 /*HS!!2*//spiral inward, section 5
 section = 5
 start_angle = PI
 // opposite (clockwise) = -1
 r=(5/6)*(r0-margin)
 xc=0.5*max_room_xcm-(0.5/6)*(r0-margin)
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*//spiral inward, section 6
 section = 6
 start_angle = 0
 // opposite (clockwise) = -1
 r=(4/6)*(r0-margin)
 xc=0.5*max_room_xcm+(0.25/6)*(r0-margin)
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*//spiral inward, section 7
 section = 7
 start_angle = PI
 // opposite (clockwise) = -1
 r=(3/6)*(r0-margin)
 xc=0.5*max_room_xcm-(0.5/6)*(r0-margin)
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*//spiral inward, section 8 == 8
 section = 8
 start_angle = 0
 // opposite (clockwise) = -1
 r=(2.2/6)*(r0-margin)
 xc=0.5*max_room_xcm
 /*SH2!2*//gosub encode_half_circle
 /*HS!!2*//---spiral inward, section 9
 section = 9
 start_angle = PI
 // opposite (clockwise) = -1
 r=(1/6)*(r0-margin)
 xc=0.5*max_room_xcm-(0.5/6)*(r0-margin)
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*//--- figure of 8, section 10
 section = 10
 start_angle = PI
 rotation_dir = -1
 // opposite (clockwise) = -1
 r=0.5*(r0-margin)
 xc=0.5*max_room_xcm+r
 /*SH2!2*//gosub encode_half_circle
 /*HS!!2*// ---figure of 8, section 11
 section = 11
 start_angle = 0
 rotation_dir = -1
 // opposite (clockwise) = -1
 r=0.5*(r0-margin)
 xc=0.5*max_room_xcm+r
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*// ---figure of 8, section 12
 section = 12
 start_angle = 0
 rotation_dir = 1
 // opposite (clockwise) = -1
 r=0.5*(r0-margin)
 xc=0.5*max_room_xcm-r
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*// figure of 8, section 13
 section = 13
 start_angle = PI
 rotation_dir = 1
 // opposite (clockwise) = -1
 r=(1/2)*(r0-margin)
 xc=0.5*max_room_xcm-r
 //r=(1/6)*(r0-margin)
 //xc=0.5*max_room_xcm-r-(0.25/6)*(r0-margin)
 /*SH2!2*/gosub encode_half_circle
 /*HS!!2*// figure of 8, section 14
 section = 14
 start_angle = PI
 rotation_dir = -1
 // opposite (clockwise) = -1
 r=0.5*(r0-margin)
 xc=0.5*max_room_xcm+r
 /*SH2!2*/gosub encode_half_circle
 /*HS!!2*// figure of 8, section 15
 section = 15
 start_angle = 0
 rotation_dir = -1
 // opposite (clockwise) = -1
 r=0.5*(r0-margin)
 xc=0.5*max_room_xcm+r
 /*SH2!2*/gosub encode_half_circle
 /*HS!!2*// figure of 8, section 16
 section = 16
 start_angle = 0
 rotation_dir = 1
 // opposite (clockwise) = -1
 r=0.5*(r0-margin)
 xc=0.5*max_room_xcm-r
 /*SH2!2*/gosub encode_half_circle
 /*HS!!2*// figure of 8, section 17
 section = 17
 start_angle = PI
 rotation_dir = 1
 // opposite (clockwise) = -1
 r=0.5*(r0-margin)
 xc=0.5*max_room_xcm-r
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*// figure of 8, section 18
 section = 18
 start_angle = PI
 rotation_dir = -1
 // opposite (clockwise) = -1
 r=0.5*(r0-margin)
 xc=0.5*max_room_xcm+r
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*// figure of 8, section 19
 section = 19
 start_angle = 0
 rotation_dir = -1
 // opposite (clockwise) = -1
 r=0.5*(r0-margin)
 xc=0.5*max_room_xcm+r
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*// figure of 8, section 20
 section = 20
 start_angle = 0
 rotation_dir = 1
 // opposite (clockwise) = -1
 r=0.5*(r0-margin)
 xc=0.5*max_room_xcm-r
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*// figure of 8, section 21
 section = 21
 start_angle = PI
 rotation_dir = 1
 // opposite (clockwise) = -1
 r=0.5*(r0-margin)
 xc=0.5*max_room_xcm-r
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*// figure of 8, section 22
 section = 22
 start_angle = PI
 rotation_dir = -1
 // opposite (clockwise) = -1
 r=0.5*(r0-margin)
 xc=0.5*max_room_xcm+r
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*// figure of 8, section 23
 section = 23
 start_angle = 0
 rotation_dir = -1
 // opposite (clockwise) = -1
 r=0.5*(r0-margin)
 xc=0.5*max_room_xcm+r
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*// figure of 8, section 24
 section = 24
 start_angle = 0
 rotation_dir = 1
 // opposite (clockwise) = -1
 r=0.5*(r0-margin)
 xc=0.5*max_room_xcm-r
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*// figure of 8, section 25
 section = 25
 start_angle = PI
 // opposite (clockwise) = -1
 r=0.5*(r0-margin)
 xc=0.5*max_room_xcm-r
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*//--spiral outward, section 26
 section = 26
 start_angle = 0
 // opposite (clockwise) = -1
 r=0.5*(1/4)*(r0-margin)
 xc=0.5*max_room_xcm-r
 /*SH2!2*//gosub encode_half_circle
 /*HS!!2*//---spiral outward, section 27
 section = 27
 start_angle = PI
 // opposite (clockwise) = -1
 r=0.5*((1/4)+(1/3))*(r0-margin)
 xc=0.5*max_room_xcm
 /*HH2!2*//gosub encode_half_circle
 /*HH2!2*//spiral outward, section 28 == 22
 section = 28
 start_angle = 0
 // opposite (clockwise) = -1
 r=(2/7)*(r0-margin)
 /r=0.5*((2/4)+(1/3))*(r0-margin)
 xc=0.5*max_room_xcm-r
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*//spiral outward, section 29 == 23
 section = 29
 start_angle = PI
 // opposite (clockwise) = -1
 r=0.5*((2/4)+(2/3))*(r0-margin)
 xc=0.5*max_room_xcm
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*//spiral outward, section 30 == 24
 section = 30
 start_angle = 0
 // opposite (clockwise) = -1
 /r=(4/6)*(r0-margin)
 r=0.5*((3/4)+(2/3))*(r0-margin)
 xc=0.5*max_room_xcm-(0.5/6)*(r0-margin)
 /*HH2!2*///gosub encode_half_circle
 /*HH2!2*//spiral outward, section 31 == 25
 section = 31
 start_angle = PI
 // opposite (clockwise) = -1
 //new 1/2 circle
 //r=r0-margin-10
 //xc=0.5*max_room_xcm
 //old 1/2 spiral out
 r=0.5*((3/4)+(3/3))*(r0-margin)
 xc=0.5*max_room_xcm+(0.25/6)*(r0-margin)
 /*SH2!2*///gosub encode_half_circle
 /*HS!!2*/// set reciprocal weights (trajectory <-> Sequence_State)
int section, last_node=first_node(Trajectory)-1
 float previous_phase = 0, current_phase = 0
//for section = 1 to 31
 //for section = 1 to 4 //31 // only 2 outer circles. Amsterdam
 for i = 1 to 31
 section = section_list[i]
 if section!=-99
for nodes_id = last_node+1 to last_node + section_size[section]
/float node_sigma = GRBF_node_sigma(nodes_id)
 /previous_phase = wire_weight(wirebetnodes(nodes_id, first_node(Sequence_State)))
 /current_phase = previous_phase + node_sigma*section_boost*float(nodes_id - first_node(Trajectory) + 1)
 /set_weightsbetnodes nodes_id, first_node(Sequence_State), current_phase
 / very new node+sigma based
set_weightsbetnodes nodes_id, first_node(Sequence_State), section_boost*float(nodes_id - first_node(Trajectory) + 1)
 / new node based
 /set_weightsbetnodes nodes_id, first_node(Sequence_State), section_boost*float(section)
 / old section based
 if section != 2
 // section 2 done above
/current_phase = previous_phase + node_sigma*section_boost*(float(nodes_id - first_node(Trajectory) + 1)-0.5)
 /set_weightsbetnodes first_node(Sequence_State), nodes_id, current_phase
 / very new node+sigma based
 /set_weightsbetnodes first_node(Sequence_State), nodes_id, section_boost*(float(section)-0.5)
 / old section based
 set_weightsbetnodes first_node(Sequence_State), nodes_id, section_boost*(float(nodes_id - first_node(Trajectory) + 1)-0.5)
 / new node based
 endif
 next nodes_id
 last_node = last_node + section_size[section]
endif
 //print "section done = ", section, " last_node = ", last_node
 //pak
 next i
 //next section
return
 /*HH1!1*/encode_half_circle: // new: sigma set here
 /if section > 0
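 / each GRBF node stores one waypoint: its input weights hold a point
 / (x1,y1) on the half circle, its output weights the next point and
 / the sin/cos of the next heading; the node's sigma sets the radius
 / of its receptive field around the waypoint.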
 for i = 0 to section_size[section]-1
theta = start_angle+rotation_dir*float(i)*PI/float(section_size[section])
 theta_next = start_angle+rotation_dir*float(i+1)*PI/float(section_size[section])
x1=r*cos(theta)+xc
 y1=r*sin(theta)+yc
 phi1 = theta+PI/2
x1next=r*cos(theta_next)+xc
 y1next=r*sin(theta_next)+yc
 phi1next = theta_next+(PI/2)*(rotation_dir) //gb
/print "Theta = ", theta
 /pak
 nodes_id = last_node +i+1
set_weightsbetnodes first_node(Position), nodes_id, x1
 set_weightsbetnodes first_node(Position)+1, nodes_id, y1
set_weightsbetnodes nodes_id, first_node(Next_Position), x1next
 set_weightsbetnodes nodes_id, first_node(Next_Position)+1, y1next
 // set_weightsbetnodes nodes_id, first_node(Next_Position)+2, phi1next
 set_weightsbetnodes nodes_id, first_node(Next_Position)+2, sin(phi1next)
 set_weightsbetnodes nodes_id, first_node(Next_Position)+3, cos(phi1next)
//print "node= ",nodes_id
 /print i, " , ", 180*phi1next/PI
 //pak
/set_GRBF_node_sigma nodes_id, max_room_xcm/15
 // set_GRBF_node_sigma nodes_id, 25+r/5 //amsterdam
 set_GRBF_node_sigma nodes_id, 25+r/10 //wellcome
 / for field drawing
 / set_GRBF_node_sigma nodes_id, 20+r/5
 / for trajectory drawing
/ set_GRBF_node_sigma nodes_id, r/3
 / set_GRBF_node_sigma nodes_id,
 / float(nodes_id - first_node(Trajectory) + 1)
next i
last_node = nodes_id
 //print "section done= ", section, " last_node = ", last_node
 //pak
/endif
 return
 /*HH1!1*/build_a_circle:
 return
 /*HH1!1*/update_network2: //new. Deals with closing the loop
set_node_output first_node(Position), x_wc_predict
 set_node_output first_node(Position)+1, y_wc_predict
 /set_node_output first_node(Position)+2, phi_wc_predict
updategrid Trajectory
 int current_grid = Trajectory
 gosub normalize_winner
 / gosub lateral_inhibition
gosub update_Sequence_State
 / commented out for field drawing
current_section = int(node_output(first_node(Sequence_State))/section_boost)
 print "======> Section nb = ", node_output(first_node(Sequence_State))/section_boost, " <====="
updategrid Special2
/*HS!!2*/ // dealing with loop back to first state
// three states before the last -> switch to the next section
 // (explains the 2.5 and 3 below)
 if node_output(first_node(Sequence_State)) >= (float(net_size)-2.5)*section_boost
 // from the last state, jump to the 12th. Old version.
 // avoiding section 1. Unclear why.
 //set_grid_outputs Special2, 13*section_boost
 //set_grid_outputs Sequence_State, 13*section_boost
 //try to jump to the start of section 1. Amsterdam 09
 forced_next_state= 3-float(net_size)+(node_output(first_node(Sequence_State))/section_boost)
 print "forced_next state = ", forced_next_state
 //print "current state = ", node_output(first_node(Sequence_State))/section_boost
 //pak
 set_grid_outputs Special2, forced_next_state*section_boost
 set_grid_outputs Sequence_State, forced_next_state*section_boost
 updategrid Trajectory
 / new node based
 endif
/print "-----Special2 = ", node_output(first_node(Special2))
 gosub update_Next_Position
/ drawgrid Trajectory, Next_Position, Position, Sequence_State, Special2
 //ug
 return
/*HH1!1*/update_Next_Position: // new
 float act_sum = 0
 // NOTE: take care when the circle closes, 360 = 0 (but av = 180!)
updategrid Next_Position
 //pak
 for k = first_node(Trajectory) to last_node(Trajectory)
 let act_sum = act_sum + node_output(k)
 //printf " %2.20f,", act_sum
 next k
 //print "act_sum = ", act_sum
 //printf "\n"
/ done already in update_Sequence_State called earlier
if act_sum > 0.0
 for k = first_node(Next_Position) to last_node(Next_Position)
 //print "node= ", k, " node_output= ", node_output(k)
 set_node_output k, node_output(k)/act_sum
 next k
 // use sin and cos to calculate the next angle put into the 3rd node.
float sind = node_output(first_node(Next_Position)+2)
 float cosd = node_output(first_node(Next_Position)+3)
/print "sind = ", sind, " cosd= ", cosd
 float delta_phi = arctan(sind/cosd)
if sind >=0 & cosd <0
 delta_phi += PI
 endif
 if sind < 0 & cosd < 0
 delta_phi -= PI
 endif
set_node_output first_node(Next_Position)+2 , delta_phi
else
 print "low activity limit"
 / pak
 endif
 return
 /*HH1!1*/update_Sequence_State: // new
 float act_sum = 0
 /float act1 = node_output(first_node(Sequence_State))
updategrid Sequence_State
 float act1 = node_output(first_node(Sequence_State))
 for k = first_node(Trajectory) to first_node(Trajectory) + net_size-1
 let act_sum = act_sum + node_output(k)
 next k
 /pak
 k = first_node(Sequence_State)
 / if act_sum > 0.000001
 if act_sum > 0.0
 set_node_output k, node_output(k)/act_sum
 /my_sound 100,100
 else
 /pak
 set_node_output k, act1
endif
 return
 /*HH1!1*/angle_diff:
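 / returns angle_difference = angle2 - angle1 wrapped into (-PI, PI],
 / built from sin/cos terms (atan2-style) so the +/-PI seam is safe.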
float sind=(cos(angle2)*sin(angle1)-sin(angle2)*cos(angle1))
 float cosd = (cos(angle2)*cos(angle1)+sin(angle2)*sin(angle1))
 float local_tan = sind/cosd
 float angle_difference = arctan(local_tan)
if sind >=0 & cosd <0
 angle_difference += PI
 endif
 if sind < 0 & cosd < 0
 angle_difference -= PI
 endif
 angle_difference =-angle_difference
 return
/*SH1!1*/
 /*HS!!1*/get_sonar_data: //for self localization -used?
 //graph_active Sonar
 //graph_clear
keyboard = port_in(96)
 // NOTE: Somehow this read fools the power saving system
 // as if a key had been pressed and there is no more slowing
 // down of the data reading after a few seconds.
 // port 96 is the keyboard chip 8255A (hex 60).
 //print keyboard
/ ms_delay 40
 sonar = 3
 gosub fast_get_sonar_range
 / ms_delay 40
sonar = 1
 gosub fast_get_sonar_range
 / ms_delay 40
sonar = 5
 gosub fast_get_sonar_range
 / ms_delay 40
sonar = 7
 gosub fast_get_sonar_range
 / ms_delay 40
/for sonar = 1 to 8
 / gosub fast_get_sonar_range
 /next sonar
sonar_d1 = sonar_range[1] - sonar_y[1]
 sonar_d2 = sonar_range[3]- sonar_x[3]
 sonar_d3 = sonar_range[5]+ sonar_y[5]
 sonar_d4= sonar_range[7]+ sonar_x[7]
 return
 /*HH1!1*/fast_get_sonar_range: // ibm thinkpad
// sonar_range[sonar]=200
// gosub get_sonar_range //gb test
 //return
ms_delay 30 //20 ...10ms for 3m
 int mask = 2^(sonar-1)
 int echo_on = 1
 int iix, iiy
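 // time-of-flight by polling: after the ping (port_out porta), portb
 // is read in a loop; the index at which the transducer's bit first
 // appears is proportional to the echo delay, and the factor below
 // (1.42) turns loop counts into cm, calibrated to this laptop's
 // loop speed.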
port_out portb, mask
/======================
 //ms_delay 1
 port_out porta, mask
for i = 0 to max_sonar_array-1
 echo[i]=0 //gb 130811
 if echo_on =1
 //echo[i]=port_in(portb)
 for iix=1 to 62 // 50 //to create a delay between port_in calls
 iiy+=iix // needed by ibm thinkpad
 next iix //this delay also shifts the place of the echo in the array.
 // with 62 iterations and the factor 1.42, we get the right distance...
 // with 50 iterations and the factor 1.17, we get the right distance...
 echo[i]=port_in(portb)
 endif
 if echo[i] = mask & echo_on=1
 echo_on=0
 echo[i]=port_in(portb) //run once
 endif
 if echo_on=0
 echo[i]=mask //to simulate the rest of the inputs
 endif
 //print echo[i]
 next i
ms_delay 1
port_out porta,0
 /=====================
int i_dist = -1
 for i = min_range_check to max_sonar_array-1
if echo[i] = mask
 if i_dist = -1
 i_dist = i
 i = max_sonar_array
 endif
 endif
 next i
sonar_range[sonar]=float(i_dist)*1.42 // 1.17 //1.19
 //print "sonar = ", sonar, " range = ", sonar_range[sonar]
 return
 /*HH1!1*/sonar_scan8:
while 1
 graph_active Sonar
 graph_clear
 keyboard = port_in(96)
for sonar = 1 to 8
 //sonar = 5
 gosub get_sonar_range
 //gosub get_sonar_range_old
 //ms_delay 400 //50
 next sonar
 endwhile
 return
 /*HH1!1*/get_sonar_range: //new test stop reading echo when echo received, gb 27.6.2012
 float dt
 //sonar = 5 //gbtest
int mask = 2^(sonar-1)
 //int mask = 2^(5-1) //gbtest only read from 5 - in front
 int echo_on = 1
 int iix, iiy
 ms_delay 30 //gb 10ms = 3m
 port_out portb, mask //reading address
 //ms_delay 1
/init_timer
 /======================
 //int mask = 2^(sonar-1) //gbtest
port_out porta, mask //ping out
for i = 0 to max_sonar_array-1
 echo[i]=0 //gb 130811
 if echo_on =1
 //echo[i]=port_in(portb)
 for iix=1 to 50 //1 //to create a delay between port_in calls
 iiy+=iix // needed by ibm thinkpad
 next iix //this delay also shifts the place of the echo in the array.
 // with 61 iterations and the factor 1.386, we get the right distance...
 // with 62 iterations and the factor 1.42, we get the right distance...
 // with 50 iterations and the factor 1.17, we get the right distance...
 echo[i]=port_in(portb)
 endif
 if echo[i] = mask & echo_on=1
 echo_on=0
 echo[i]=port_in(portb) //run once more
 endif
 if echo_on=0
 echo[i]=mask //to simulate the rest of the inputs
 endif
 //print echo[i]
 next i
 ms_delay 1
port_out porta,0 // reset all inputs?
/=====================
 /dt = elapsed_time(1)
/-----
 int i_dist = 0
 /init_timer
 for i = min_range_check to max_sonar_array-1
color = GREEN
 / if land(echo[i],mask) = 0
 if echo[i]=0
 color = RED
 endif
/ if land(echo[i],mask) = mask
 if echo[i] = mask
 color = YELLOW
 if i_dist = 0
 i_dist = i
 / i = max_sonar_array
 endif
 endif
move float(i), float(sonar+1)-0.3
 line float(i), float(sonar+1)+0.3, color
next i
//print "Sonar ", sonar, ", range = ",i_dist, " (dt1=",dt," dt2=",elapsed_time(1),")"
 print "Sonar ", sonar, ", range [cm] = ",float(i_dist)*1.17 //1.389
 if sonar = 8
 print " "
 endif
 sonar_range[sonar]=float(i_dist)*1.17 // 1.19
 return
 /*HH1!1*/get_sonar_range_old:
 float dt
 int mask = 2^(sonar-1)
 int iix, iiy
port_out portb, mask
/init_timer
 /======================
 port_out porta, mask
 for i = 0 to max_sonar_array-1
 echo[i]=0 //gb 130811
 echo[i]=port_in(portb)
 for iix=1 to 10 //to create a delay between port_in calls
 iiy+=iix // needed by ibm thinkpad
 next iix
 //print echo[i]
 next i
port_out porta,0
 /=====================
 /dt = elapsed_time(1)
/-----
 int i_dist = 0
 /init_timer
 for i = min_range_check to max_sonar_array-1
color = GREEN
 / if land(echo[i],mask) = 0
 if echo[i]=0
 color = RED
 endif
/ if land(echo[i],mask) = mask
 if echo[i] = mask
 color = YELLOW
 if i_dist = 0
 i_dist = i
 / i = max_sonar_array
 endif
 endif
move float(i), float(sonar+1)-0.3
 line float(i), float(sonar+1)+0.3, color
next i
//print "Sonar ", sonar, ", range = ",i_dist, " (dt1=",dt," dt2=",elapsed_time(1),")"
print "Sonar ", sonar, ", range [cm] = ",float(i_dist)*1.389
 if sonar = 8
 print " "
 endif
 sonar_range[sonar]=float(i_dist)*1.19
 //sonar_range[sonar]=float(i_dist)*1.19 //values for old laptop
 //endif
 return
 /*HH1!1*/sonar_plot:
 /-------sonar plot------------
 // ------------ erase old data-------------
 // printf "x_wc= %f, y_wc= %f, phi_wc= %f\n", x_wc,y_wc,phi_wc
 // print "phi_wc = ",phi_wc*180/PI
 color = 0
 // 0 = black
 for sonar = 1 to 8
 xs = old_x_wc + sonar_x[sonar]*cos(old_phi_wc-0.5*PI) - sonar_y[sonar]*sin(old_phi_wc-0.5*PI)
 ys = old_y_wc + sonar_y[sonar]*cos(old_phi_wc-0.5*PI) + sonar_x[sonar]*sin(old_phi_wc-0.5*PI)
 win_move xs,ys
 xs = xs + old_sonar_range[sonar]*cos(old_phi_wc-0.5*PI + sonar_dir[sonar])
 ys = ys + old_sonar_range[sonar]*sin(old_phi_wc-0.5*PI + sonar_dir[sonar])
 win_line xs,ys, color
 next sonar
 //print "erased old sonar"
 //pak
 // -------------new data-------------
 color = 5
 // a level of grey
 for sonar = 1 to 8
 xs = x_wc + sonar_x[sonar]*cos(phi_wc-0.5*PI) - sonar_y[sonar]*sin(phi_wc-0.5*PI)
 ys = y_wc + sonar_y[sonar]*cos(phi_wc-0.5*PI) + sonar_x[sonar]*sin(phi_wc-0.5*PI)
 win_move xs,ys
 xs = xs + sonar_range[sonar]*cos(phi_wc -0.5*PI + sonar_dir[sonar])
 ys = ys + sonar_range[sonar]*sin(phi_wc -0.5*PI + sonar_dir[sonar])
 win_line xs,ys, color
 old_sonar_range[sonar] = sonar_range[sonar]
 next sonar
 /pak
 return
 /*SH1!1*/
 /*SS!!1*/
 /*HS!!1*/find_min_max_act:
 float max_act = 0
 float min_act = 1
 int max_node=0, min_node=0
 for nodes_id = first_node(current_grid) to last_node(current_grid)
 if node_output(nodes_id) > max_act
 max_act = node_output(nodes_id)
 max_node = nodes_id
 endif
 if node_output(nodes_id) < min_act
 min_act = node_output(nodes_id)
 min_node = nodes_id
 endif
 next nodes_id
return
/*HH1!1*/normalize_winner:
 // boost all by factor such that the winner has
 // an activity 1 and the smallest an activity 0
gosub find_min_max_act
// boost
 for nodes_id = first_node(current_grid) to last_node(current_grid)
 set_node_output nodes_id, (node_output(nodes_id)-min_act)/(max_act-min_act)
 next nodes_id
return
/*HH1!1*/lateral_inhibition:
 // inhibit all by average activity of active nodes
 // find average
 float av_act = 0
 float node_number = 0
 for nodes_id = first_node(current_grid) to last_node(current_grid)
 if node_output(nodes_id) > 0.01
 av_act = av_act + node_output(nodes_id)
 node_number = node_number + 1
 endif
 next nodes_id
 av_act = 0.8*av_act/node_number
 // to prevent problems when all active nodes have the same output
if node_number = 1
 av_act = 0
 endif
 // subtract average
 for nodes_id = first_node(current_grid) to last_node(current_grid)
 if node_output(nodes_id) > av_act
 set_node_output nodes_id, node_output(nodes_id) - av_act
 else
 set_node_output nodes_id, 0
 endif
 next nodes_id
 return
 /*SH1!1*/
 /*#CP=29256:XX=99*/

VA

VA

The Virtual Advisor:

{Published in Mediaspace 4. 1997}

An Online Advisory & Support Environment for Staff, Students & Industrial Partners.

Academic Programmes with vocational components (sandwich courses) have to maintain a continual dialogue with placement companies in order to monitor industrial trends and preferences. This constant background activity (which includes frequent visits) places a relatively unrecognised strain on programme teams, especially as many companies are located at considerable distances from the home University. Students are often concerned about their level of personal skills and the relevance of their programme of study to employers, and are constantly seeking first-hand information about the ‘real world’. The ‘Virtual Advisor’ (VA) recognises a need to provide a mechanism that allows staff to enter into a continuous interaction with new and established companies, and to encourage a critical dialogue between students, staff and prospective employers.

The underlying aim of the design team was to create an ‘active’ facility that would be in a state of continual use and dynamic change. As there was an obvious need for the VA, it was important to make the system pleasurable to use, and to encourage a high level of participation. Initially, emphasis was placed on designing a ‘location’ that would mimic or be reminiscent of a ‘real’ location. By seeing the real location the user would be reminded of the discourse underway in the simulated environment and be encouraged to enter into further discussion. However, the ‘VA’ has a real geographical location (the WWW server that hosts it), which is a considerable distance from the primary target audience (London); it also aims to support a national and international audience. It would be difficult, if not impossible, to identify a location that would have universal appeal and be visible from all areas of the globe. An off-world location was finally chosen: a set of coordinates, a cluster of stars, between the horns of Taurus, in an area of high radio activity in the Crab Nebula. Look to the skies and imagine the ‘VA’.

As the project progressed it became apparent that the significant design issues were not the visual aspects of the site, but the processes and functions that needed to be built in to enable a process of discussion to take place. And, although the concept of a location ‘somewhere out there’ (certainly not in the server room in Plymouth) is quintessential to the project, emphasis changed to the development of a ‘process’. The graphical elements are reminiscent of network diagrams: nodes and branches, a process of connection. The 3D renderings support the notion of a physical space, but the abstract structure places significance on linking, travelling from A to B, and passing through the system.

Sections:

Project: this section allows students to submit an illustrated proposal for their Final Stage Project (which contributes a third of their degree classification). This can be seen by staff and other students. Updates can be submitted at several points throughout the Final Stage.

Experience: allowing students to enter their experiences from placement and post-Programme employment (issues tend to focus on management and production methods, client/company relationships, anecdotes, and technical experience).

4D Conversation: a layered discussion, allowing participants to maintain a multi-user conversation, back-linking to previous entries.

Think Tank: accepts proposals, ideas and problems from external advisors. These can then be developed by student production groups.

Production Team:

Will Bakali, graphics, scripting and server control;

Mike Phillips and Chris Speed, concept and system design.

Many thanks to:

Brian Eno;

Andy Finney (The Independent Multimedia Company);

Mic Cady (Dorling Kindersley Online Publishing), AA Multimedia;

Rob Morrison (Silicon Graphics).

Tech:

The VA was built on an Apple Mac WWW server, using WebStar and Netforms, CGI scripts, HTML, Shockwave (Director), JavaScript, Photoshop, Netscape, etc.