VR Collab #10 – The remake

There were a lot of new things to get into our remake. I will break it down into three sections: sound effects, ambiences and music.

Sound effects

Star
The first thing you notice in the game is a star swirling and popping at the end. It would have been a failure on my part if that action didn’t have a sound! Here are the sounds I’ve put together:

The teleportation
I think one of the things that confused me the most was the absence of a sound for the player’s “steps”. The fact that the player was silent throughout the game gave me the feeling that the player was invisible. By adding a sound, I open a window for the player to imagine what their figure might be like. This sound was also shared with Cai and Elliot for them to use in their own edits.

Portals
I added two different portal sounds so the player could distinguish between them. I also made sure to add an inverted version for each time the player exits and enters through a portal.

Penguin
Finding this sound was really tricky. How would I find a sound of a penguin dancing? In the end, I found a recording on freesound.org by someone who could imitate funny sounds. And interestingly enough, it works (I think).

Torch
There are at least four elements of the torch that needed sound: its handle, ignition, fire crackling, and release. All of these are covered.

Rims
The hoops have a peculiar sound that can only be heard after the player has entered the last portal. It’s an electric sound that comes and goes, giving the feeling of movement. I don’t think it was the best addition.

Ambiences

There are two main ambiences that can be heard while the player is on the platform, and another that reminds the player that they are flying.

Music

What the music really lacked was spatialisation. When I initially made it, it already had enough reverberation to do justice to the genre. In this remake, it has a thousand times more! I used a convolution reverb with a church preset.

VR Collab #9 – Thoughts on the final cut

Rita and Laura had different deadlines, so the workflow was different from what I expected. Up until the time of the launch, I had no idea what the game would look like. Cai, Elliot and I only had the references given to us through the PowerPoint and the conversations we had directly with them, either by video call or face to face. I still remember being present at the ADR recording for the penguin, but we still didn’t have a pictorial vision of what this would look like. I was amazed to learn that the most interactive part was the one my partner Cai was working on, and by the look of the scene I worked on myself. In this post, I will leave my opinion on each scene and talk about the sound itself.

Winter Wonderland (sound design by Cai Pritchard)

First of all, the look of the game itself is strange, and I still have difficulty understanding what age group it’s supposed to be for. I remind you that the main idea of the game is meditation.

I think Cai’s scene was quite well done. The sounds work perfectly, and the scene doesn’t require much sound design either. In conversation with Cai, I learned that what he found most difficult was the music production, since neither of us is “sonically” made for that.

What I liked the most in Cai’s scene happens at minute 7:20, when the player is asked to do a breathing exercise. That is the only moment I can imagine the player having a meditation-like experience. At this point, the player is immersed in a quiet field recording, and I think that should be the case for the whole scene. The music works well as a characteristic element of the environment in which it is inserted; still, when the meditation motif is put in perspective, I don’t think it helps much to achieve that end.

Trippy scene (my sound design)

Watching my scene, I was saddened to learn that there were thousands of elements that could have been sounded. I was asked for two sounds, the music and a haptic sound, but only the former was used. Even that sound has no treatment whatsoever: there is no spatialisation, no identifiable 3D emitter. It’s like a stereo recording playing flat through the headphones.

It makes me think that if I had had access to my scene earlier, the final result would have been a very different experience. I didn’t know, for example, that my scene was set in space, or that there were giant hoops surrounding a platform made of crystal, with portals and many other interactive elements.

These are all considerations that I will have when the remake is made. I hope I can give the scene the sound it deserves.

Zenrappy (sound design by Elliot)

I believe this scene is the most accomplished. Compared to the previous ones, this one is much more balanced: the interactions are well mediated, and the surrounding space is not too extensive. However, I also feel some imbalance in the amount of time the player would dedicate to this part. Judging from the video analysis, the player would not spend more than two minutes here, whereas in the Winter Wonderland scene they could happily spend ten.

What I think works well in this scene is the sound of the water in contrast to the gong. Technically, there might be a problem with Elliot’s chosen recording, as you can hear a plane fly by. Other than that, I think it works well.

There’s a lot more to explore in these scenes in terms of sound, which I’ll have to discuss with my colleagues. Unfortunately, we won’t be able to get our games into Unity or FMOD, but together we can come to more logical conclusions and deliver a proper job.

VR Collab #8 – Making what was requested

In this post, I will deconstruct the entire production of what was asked of me through an analysis of the samples and techniques used.

To recap what was asked of me:

  • a “trippy” vaporwave song, but one that isn’t sad or “weird”.
  • a haptic sound effect for when the joysticks bump into each other

Music

After the whole process, I’m writing this post and still haven’t found a name for the song. What is true is that I had a lot of fun making it. I feel like I’m not that bad of a music producer. I don’t have ANY musical skills, but I get by! It’s all down to my editing and sound design skills. The track also owes a lot to my intense research into the genre. It is essential to mention that there is much more to explore within it; there are other exciting subgenres born from vaporwave, such as mallsoft.

On the first production day, I booked the Composition Studio and invited my colleagues Sam Knobbs, Harry Charlton, Hywel, and Jack Centro as I thought it might be fun. I had no idea which samples I was going to use. I was at ground zero in the creative and technical process.

I tried to make a vaporwave song in 10 minutes (and failed). This was the result: I ended up putting a lot more work into the project than I thought I would. As you can see, I only had one audio track in that Ableton project.

I’ll start by talking about the samples I used.
In that session with my friends, I concluded that the sample I would use would be from the song “Soup for one” by Chic, released in 1982.

I know this song through a better-known one, “Lady (Hear Me Tonight)”, a hit from 2000 by the French House group Modjo.

I thought the song’s tone was upbeat enough to escape the inherent sadness in the music genre. With this sample, I would achieve one of the most essential aspects of vaporwave – recognising the sample to take the listener to the past. I think it worked.

For the vocals, I used “Finally”, a Kings of Tomorrow song featuring Julie McKnight. However, the voices are not very clear, as they went through a chopped and screwed process and a lot of reverb.

Finally, for the drums, I used a sample of a Soft Hair song entitled “In Love”. The drums give a drunken feel and make an excellent addition to the mix.

I’ve only sampled the beginning of the song.

Outside the music scene, this next sample is the most important. It adds the drama and fatalism they were looking for, giving the track an elegant romantic touch.

I currently find myself re-editing the song and hope to add more samples. One of them is “Careless Whisper” by George Michael, to give it a twist.

Here is the final result:

Sound Effect

I could not do much research or analytical interpretation for the sound effect. All I have to say about it is that I layered about four sounds to achieve one. My goal was simple: to make a fun but strident sound, given the context in which it is set. When I made the sound, I wanted it to sound like a little star. Here’s the result:

Sounds were sourced from sound libraries and Arturia’s Analog Lab.

VR Collab #7 – My favourite sound implementations into games – Alien: Isolation

In the previous blog post, I talked about the process of developing sound design for video games, so I thought it was pertinent to talk about my favourite sound implementations. In my opinion, what takes video game audio to another level is programming skill and the characteristics of the medium itself. This reminds me a lot of the concepts covered in the sound art exhibitions in January. In that module, I learned about what Adam Basanta calls the fifth dimension: interactivity. Realising which factors in our creation can influence the experience of the “mobile” listener is directly analogous to the “audio listener” of video games. A good sound designer is one who, through a very broad knowledge of creative sound applications, manages to take the player/audio listener to other levels of understanding of reality.

In this publication, I will address implementations that fascinate me from both a technical and a personal perspective. There are implementations that are technically complex but whose result is quite simple, and vice versa.

The sound of Alien: Isolation

While researching the aural experience the player has during the game, I came across terms I had never encountered before: “Sound Engine” and “Sound AI”, terms apparently pioneered by this game, especially the latter. For Valkyrie Sound, a video game sound designer, Alien: Isolation has “the best sound design in any video game”. He explains that the game’s sound design blurs the lines between the biological and the mechanical, between music and sound effect, between the mundane and the horrifying, and between game and film.

There are several elements I would like to discuss, especially in terms of my personal experience. Alien: Isolation is a horror game whose narrative is set on the space station Sevastopol, which is in total chaos after the spread of the xenomorph, an ancient biological weapon that invades humans and uses them as the foundation for the creation of a killer monster popularly known as the alien. The main character of the game is Amanda Ripley, daughter of Ellen Ripley (played by Sigourney Weaver in the 1979 film). She enters the station after an accident on her spaceship. Initially, little information is given about what is happening on the station, but noticeably something very wrong is going on. However, through dramatic irony, the player knows what is coming.

It is on this irony that the entire sound design is based. From the very beginning of gameplay, the player is encapsulated in a cloud of tension that is impossible to escape. With its 8,000+ sounds, Alien: Isolation immerses the player in layers and layers of chronic anxiety, where anything that moves has instinctive repercussions that exalt fear in a brilliant way. The game is endowed with mechanisms and techniques that make this possible.

Ambiences

As mentioned earlier, there is a correlation between the technological and the biological. In Sevastopol, the ambient sounds are a biological representation of a mechanical being. The space station breathes, shakes, freezes, sneezes. All this in a subtle but fearful representation. This environment, however, is not constant. It is a sound that mutates according to the player’s movement, intensifying and attenuating at times.

Sound Engine and the Alien AI

The game is very dark, and the sound is, I would say, 70% responsible for that. The remaining 30% is the image of the alien, which takes less than 3 seconds to appear; the sound, on the other hand, takes a lifetime to fade. This is due to a dedicated sound engine that adapts to the circumstances in which the player finds himself. If it’s dark, the music gets more oppressive. If the player is hiding, panic sets in. Alongside the game’s sound world, there were also innovations at its release in 2014 regarding the AI driving the alien. This AI recognises the player and studies the patterns of conduct they normally follow in their gameplay. I would say the player has no chance of surviving on 70% of the occasions that the alien is in the same place as them, and 99.9% once it has spotted them. Only sound can save the player.

This scene is an excellent representation of what I’m talking about. The player is in the first hour of the game and, after a cutscene in which a character is killed by the alien without Amanda seeing exactly what form it takes (dramatic irony), she flees in a panic to survive. Meanwhile, she waits for a train that will take her to the other side of the station, but it is quite late, and the player can do nothing but wait.

Radar vs. Footsteps or Tech. vs Bio

One of the most classic moments in the game is the dialectic created between the alien detection radar and the sound of the alien itself. Everything is interconnected. The alien’s AI is sensitive to sound, so any sound produced by the player will attract it, causing the player to die. This radar, called the “motion tracker” in the game, emits a traumatising sound that can be more terrifying than the sound of the monster itself. As the alien approaches, the radar beeps faster and faster, but sometimes the sound is misleading and the alien ends up not appearing. It’s a somewhat unfair mechanic that the player can’t control. The player just suffers.
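The motion tracker mechanic can be reduced to a simple rule: the closer the target, the shorter the delay between beeps. Here is a minimal Python sketch of that idea, with a linear mapping and hypothetical parameter values (this is an illustration, not code from the actual game):

```python
def beep_interval(distance, min_interval=0.1, max_interval=1.5, max_range=20.0):
    """Map the target's distance (metres) to the delay between radar beeps.

    At max_range or beyond, beeps are slow and sparse; at point-blank
    range they approach min_interval. Linear falloff for simplicity.
    """
    t = min(max(distance / max_range, 0.0), 1.0)  # clamp to the 0..1 range
    return min_interval + t * (max_interval - min_interval)

far = beep_interval(20.0)   # slow, sparse beeping
near = beep_interval(0.0)   # frantic beeping
```

The “misleading” quality of the tracker comes from the fact that it measures motion, not line of sight, so a fast beep rate does not guarantee the alien will actually appear.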

VR Collab #6 – Understanding the dynamics required for sound design for video games

In this post, I will reveal some insights I have gained through analysing the process of implementing audio in video games.

First of all, it is essential to understand the video game scenario. In terms of means of production, there are two types of multimedia:

  • Non-linear media: not presented sequentially or chronologically. This type of multimedia requires the user’s interaction; video games and even web pages are examples of this.
  • Linear media: in this type of multimedia, information is retained or observed in a continuous sequence. It is not a format one can interact with. Typically, these presentations begin at a predetermined point and end the same way. Clear examples of linear media are a PowerPoint presentation or a film.

So, where does audio implementation come in? A practical concept called 3D emission is the basis of all the sonic magic. Basically, the sound designer is presented with a world he has to make sound. He has to geolocate several small speakers with different, programmable sounds. These speakers are called 3D emitters. For the emitters to be activated, you need an “audio listener”: a virtual pair of ears that picks up the sounds from 2D and 3D emitters. So what is the difference between 2D and 3D emitters?

2D: the audio plays directly through the headphones/audio listener. There is no spatialisation of the sound and no geolocation in XYZ.

3D: volume and panning are modified depending on the distance and direction of the audio listener relative to the emitter.

Room tones, ambiences, and music are usually programmed as 2D sounds. However, the creative potential of these two parallels lies in the possibility of transitioning from 2D to 3D. A classic example of this type of transition can be found in the mythical GTA V (and other instalments). When a player breaks into a car, before getting in, you can hear the sound of the radio spatialised. But once the player gets in and starts driving, the radio plays as a 2D sound.
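The 2D/3D distinction can also be sketched numerically. The toy Python function below computes a gain and a pan value for a 3D emitter from the listener’s position and facing; the falloff curve, range, and function names are all illustrative assumptions, not how any particular engine implements it:

```python
import math

def emitter_gain_pan(listener_pos, listener_fwd, emitter_pos, max_dist=30.0):
    """Toy 3D-emitter maths: gain from distance, pan from direction.

    Positions are (x, z) pairs on a flat plane. A 2D sound would skip
    all of this and simply play at a fixed gain and centre pan.
    """
    dx = emitter_pos[0] - listener_pos[0]
    dz = emitter_pos[1] - listener_pos[1]
    dist = math.hypot(dx, dz)
    gain = max(0.0, 1.0 - dist / max_dist)  # linear falloff, silent at max_dist
    # signed angle between the listener's facing and the emitter direction
    angle = math.atan2(dx, dz) - math.atan2(listener_fwd[0], listener_fwd[1])
    pan = math.sin(angle)                   # -1 = hard left, +1 = hard right
    return gain, pan

# Listener at the origin facing +z; the car radio is ahead and to the right,
# so it should come through attenuated and panned right.
gain, pan = emitter_gain_pan((0.0, 0.0), (0.0, 1.0), (10.0, 10.0))
```

In the GTA example above, the 2D-to-3D transition amounts to switching between this per-frame calculation and a fixed gain/pan once the player is inside the car.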

How do you trigger a sound?

There are several ways to do this:

  • By pressing a button
  • Collision
  • Values (numbers)
  • Animation notifiers
  • Proximity
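All of these triggers reduce to the same pattern: some game-side condition fires a named event, and the audio side has registered a sound against that name. A minimal, engine-agnostic Python sketch of the classic “when x happens, sound y plays” contract (the class, event names and file names are hypothetical):

```python
class AudioEventMap:
    """Map named game events to sounds, regardless of what triggered them."""

    def __init__(self):
        self.bindings = {}
        self.log = []  # stand-in for the audio playback backend

    def bind(self, event, sound):
        self.bindings[event] = sound

    def trigger(self, event):
        """Play (here: log) the sound bound to an event, if any."""
        sound = self.bindings.get(event)
        if sound is not None:
            self.log.append(sound)
        return sound

audio = AudioEventMap()
audio.bind("button_pressed", "ui_click.wav")        # by pressing a button
audio.bind("torch_collision", "torch_clack.wav")    # collision
audio.bind("player_near_portal", "portal_hum.wav")  # proximity

audio.trigger("torch_collision")  # the collision fires, the clack plays
```

Whether the trigger is a button press, a collision, a parameter value, an animation notifier, or proximity only changes who calls `trigger`; the binding itself stays the same.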

In this way, the sound designer has to find a way to reach the programmer to achieve his goals, since the two speak different languages in their development processes.

According to Sam Hayhurst, game programmer, the first thing to establish with the sound designer is “what are we trying to achieve together?” “Everything I do later on is shared via Google Docs, where I get feedback from them (the sound designers).” In this document, the information is written as follows: when x happens, sound y is played. Hayhurst also mentions that active dialogue between the two is necessary, and he avoids group chats as much as possible, as they can slow down the audio programming process.

Succinctly, here’s what a sound designer should ensure when collaborating with a programmer:

  1. Explain precisely and in simple terms what is intended concerning the sound event.
  2. Make it clear when you want to make changes to specific audio.
  3. Clarify whether it’s a prototype that might be thrown away or if it’s a feature that’s here to stay.
  4. Don’t be afraid to ask questions.
  5. Speak face to face or call, because a lot of information can be lost in text.
  6. Take the time to get to know each other and build a stronger working relationship.
  7. Don’t tell the programmer how to do their job.

At minute 3, the narrator explains the new audio editing features of Unreal Engine 5.

How do sounds get into games?

To put sounds into a game, we have two options: importing them directly into the engine (Unity, Unreal Engine, etc.) or going through a tool called “audio middleware”. We then create “audio events”, also called “sound cues”, and put the sounds we want into them. These events act as containers and can hold an unlimited number of sounds, governed by playback instructions. Imagine the following scenario: a character has 20 different footstep sounds. As sound designers, we can define whether playback is random, and we can modulate the pitch and the volume. But this is just the surface: you can also work with obstruction/occlusion, rooms/portals, attenuation, switches, states, and RTPCs.
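The footstep scenario can be sketched in code: an event is a container of sound variations, and each playback picks one at random and nudges its pitch and volume. A hypothetical Python version of that container (middleware like FMOD or Wwise exposes this through its authoring UI rather than code; names and ranges here are illustrative):

```python
import random

class SoundCue:
    """Event container: random variation selection plus pitch/volume modulation."""

    def __init__(self, sounds, pitch_range=(0.95, 1.05), volume_range=(0.8, 1.0)):
        self.sounds = list(sounds)
        self.pitch_range = pitch_range
        self.volume_range = volume_range

    def play(self, rng=random):
        """Pick one variation and slightly modulate its pitch and volume."""
        return {
            "sound": rng.choice(self.sounds),
            "pitch": rng.uniform(*self.pitch_range),
            "volume": rng.uniform(*self.volume_range),
        }

# Twenty footstep variations; each call to play() sounds slightly different,
# which avoids the "machine-gun" effect of repeating one identical sample.
footsteps = SoundCue([f"step_{i:02d}.wav" for i in range(20)])
event = footsteps.play()
```

The randomised pitch and volume are what keep twenty samples from ever sounding like the same footstep twice in a row.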

So, what is the difference between using audio middleware and integrating sounds directly into the engine?

Middleware tools like FMOD and Wwise are third-party tools that sit between the game engine and the audio hardware. Learning audio middleware is a daunting task, but understanding the basics is enough to pick up the rest of the features.

Bigger productions usually have dedicated audio programmers, who are underrated figures. Game engines like Unity or Unreal Engine are more limited on their own. However, Epic Games, the developers of Unreal Engine, have put a lot of work into improving audio editing on the platform. Other publishers have their own engines: Ubisoft works with Snowdrop and EA with the Frostbite Engine.

Implementation is not the last step in the process of sounding a game. It must be taken into account that:

  • No matter how good the sound is, the player will never hear it if it’s not in the game.
  • The way the sound will play back usually shapes the sound’s design and the production pipeline.
  • Early on, collaborating with the design and programming teams to create audio systems for specific gameplay mechanics will ensure that the audio group can deliver their best work and won’t simply be left behind.

VR Collab #5 – Seeking spiritual inspiration in vaporwave music made by an anime fanatic.

To finish my drift through synthwave and vaporwave albums, I introduce Desired. Compared to the artists mentioned previously, there is not much relevant information available about Desired; the only sources are streaming or record-reselling platforms such as SoundCloud, Spotify, and Discogs. However, Desired’s musical content is very close to what I intend to produce for Virtual Aeffects.

Desired is a young Russian from Ekaterinburg and an anime fanatic, visible through the constant references to anime characters on his album covers, who goes by the aliases “Sailor Senshi” and “Saturn Genesis”. He belongs to the group “Sailor Team”. His inspirations, apart from anime, are 90s Japanese culture and the music genres in vogue at that time, such as French House, with names like Daft Punk and Modjo. His music is totally sample-based and takes the listener into lo-fi vaporwave, future funk and French House aesthetics. Many of Desired’s tracks remind me of the mythical Modjo song “Lady (Hear Me Tonight)”.

Japanese Culture in the 90s. Why is it so influential?

I have read a few articles on Japan. The country has become a fad over the last 20 years. Many people fancy Japan as an ideal place to boost their personal qualities; it’s like an exotic paradise for Western Europeans and Americans. Talking about Japan is like talking about an impossible, infinite, technological, futuristic dream. Over the years, the medieval idea of Japan and the Japanese has faded away, as Western-inspired films no longer depict samurai, ninjas and sword sacrifices, or pictorial paintings with fish, almond trees, Mount Fuji and traditional dress. This turnaround is no doubt due to the Japan of the 1990s.

It seems that the country is encapsulated at that time. Many creatives take advantage of this to draw inspiration from these motifs, which are figuratively present although far away in time. In this way, I present some musical and cinematographic artefacts that help build this mythical idea of cultural Japan that lives in many artists.

Fishmans is a band that no longer exists in the current Japanese music scene. They were an alternative rock group from the 90s who released what is said to be a cult album appreciated by quite a few critics. That album is Long Season, a single-song album released in 1996.

A film that has aged poorly but undoubtedly served as inspiration for quite a few film productions. Apart from that, several elements that characterise Japan’s technological culture are present.

Spending the day strolling along Akihabara highstreets will give you a delightful dash of nostalgia. Businesses that you thought were dead and buried are still going in Japan, like DVD rentals and music stores. 90s retro video games also kept gaming classics alive, reminding us of our childhoods glued to a screen.

Richard Young in “Japan in the 90s: Still Alive”

This nostalgia that Richard Young speaks of is what Desired seeks in his music: the exaltation of an era and its eternalisation.

Exploring some songs

“Sunshine Aerobics” is the opening song of the album Lovestory (2017) and perfectly mirrors the elements of vaporwave music, with the twist that Desired likes: energetic, humorous, and frenetic. Obviously sampled, the lyrics portray the same love content that David Bruno addresses, which reaches for kitschy tastes. Another peculiarity of this song is the abuse of the saxophone, drenched in reverb.

The sixth track from the same album, but this time with more French House-oriented vibes: loops of the same sample and a little manipulation. There’s one thing Desired doesn’t seem to like to do, though, which is the chopped and screwed technique of traditional vaporwave.

CISA #10 – Final Considerations about my sound piece

In my dissertation, I address the folklore problem in Portugal, which held many people in the bonds of the Estado Novo. After spending an entire week listening to Michel Giacometti’s seemingly endless body of work, I made the creative decisions necessary to develop a sound piece cohesively connected to the topic I addressed in my essay. Through recording and listening, Giacometti enabled the emancipation of the people, and intellectuals came to have a broader social conscience. Through the programmes broadcast by RTP from 1970 to 1974, people were taught to listen and were able to make their own critical interpretations of the world in which they lived.

In my sound piece, I invite the listener to travel through countless juxtaposed field recordings that have no reference to the moment, the area of the country, or an objective context, but involve them in an immersive, warm and human experience. We hear the voices of the workers. We hear the women singing in tune and often out of it. We hear a panoply of anthrophonies, biophonies and geophonies, as true sonic journalism should be done.

https://soundcloud.com/notabutabu/giacometti?utm_source=clipboard&utm_medium=text&utm_campaign=social_sharing

I am not entirely dazzled by the work I have achieved; there is still a lot to do. There were many creative dilemmas to solve. Firstly, I thought of putting together all the selected moments from the filmography and making a kind of turntablism. I inserted different audio on different tracks, from mere field recordings to Giacomettian recordings. The idea was to mix them in a performance that I would record. Unfortunately, it failed. The result was poorly recorded, and the process pushed me towards what could have been the solution from the start: editing. Editing ethnomusicological work is not an idea I like; I felt it would lose the whole mystique of the performance. However, it didn’t fail as drastically as I thought it would, and it turned out to have a compensatory result.

I also had the idea of recording it on a Nagra IV-S, the same one Giacometti used, hoping to romanticise the process and make the piece even more epic and praiseworthy. I dismissed the idea due to a lack of resources and expertise, but I don’t disregard it for the near future. A Nagra IV-S in my hands would make me feel what many sonic journalists felt – the pressure of not having enough tape, the suspense of not having a quality recording, among other concerns I am unaware of. In a final thought on tape recorders, I would like to discuss in the future what the inherent creative impediments of both typical sound recorders – analogue and digital – are in a similar context to Giacometti.

Returning to my piece: it contains around 20 samples, all of which come from the work recorded by the Corsican, except for the final moments, where I dramatise the culmination of the piece by introducing the idea that the tape has run out, with sounds designed by me. What does it symbolise? It is an ode, and a summary of everything I know about this man’s work, which I hold so dear. The samples in question are all from the People Who Sing series, as those are the only ones within my reach, excluding The Net Wing. They are:

  1. Fragments of a musical enquiry in Penha Garcia (1971)
  2. The viola campaniça and the despique in Baixo Alentejo (1971)
  3. Work songs (1972)
  4. Work songs II (1972)
  5. The Stone – Póvoa de Lanhoso (1973)
  6. Workshops, Estorãos (Ponte de Lima) (1973)
  7. Workshops, São Martinho (Ponte da Barca) (1973)
  8. O alar da rede (1962)

CISA #10 – Reflections on making a sound piece

After several listens to Giacometti’s work, I feel I am an expert. I know a great deal of his work (it would be impossible to know all of it completely), so I can identify what kind of recordings are interesting to my ear. After the class I took with Mark Peter Wright on how to drag one sound to another, I was very inspired to take the manipulation of the ethnomusicologist’s work to another level. What kind of considerations might a piece that manipulates someone else’s work have? How can I justify my ideas through editing? For the moment, I recognise that my intention regarding Giacometti’s work is to perpetuate it and praise him. This obsession seems endless; it will become even more eternal by exploring concepts like repetition, juxtaposition, and extension in a sound piece. I don’t want to damage the subjects’ voices, but I want to raise them to other levels of understanding. When we mix two Chitas (singers of traditional Beira music who became famous through Giacometti’s recordings), what can happen? What can happen when, besides the Chitas, we add the masons of Lanhoso to the noise? And the viola campaniça? And the norias? And the wind? And the hoes? And the tiredness of men? And the fishermen of Portimão? And the wool handlers? And the ladies’ songs? What sound projection could this whole amalgam of sounds have?

In class with Mark, I was dazzled by the work of Maria Chavez, whom I already knew but had never connected with Peter Cusack’s sonic journalism. In essence, this pairing is a kind of archival journalism, which alludes a bit to the work of Syma Tariq. Is it possible, without speech, to demonstrate the social problems of fascist Portugal in such a piece? What if I fail? What if I fail to achieve my goal? I believe such wisdom can only be achieved by doing! In another class with Mark, I learned that it is with practice that you come to realise, which reminds me of Francis Alÿs’ work El Ensayo, in which a car consecutively tries to cross a hill but fails and turns back.

El Ensayo

It is essential to recognise that Giacometti’s best field recordings are undoubtedly the work songs, for a simple reason: music is no longer the ethnomusicologist’s main reason for collecting. There is a social issue surrounding the recording. The social reality cannot be denied, just as in post-World War II neorealism. Before our ears, we can weigh a thousand reasons for judging a society. For example, the men pounding the stone in Póvoa do Lanhoso were doing it the same way it was done a century earlier. Yet there they are, in the second half of the 20th century, the last specimens of that sound register, which in most other societies would already be extinct. They are sounds that seem to come from a very distant past but were actually recorded 50 years ago. Nowadays, these same sounds no longer exist; they are only performed if the elderly people who still know how to break stone that way are asked.

CISA #9 – Documentation as Production AKA: How to get from one sound to another

What is my research interest for this project?

  • acoustemology
  • politics of the voice
  • music and conflict 
  • ethnographic methodologies review.

Mark Peter Wright

He’s interested in the relationship between humans, animals, and technology, and the power dynamics in that triad. He is also interested in pedagogy (figuring stuff out together). He works across different media: installation, performance and radio. Listening After Nature: Field Recording, Ecology, Critical Practice is his upcoming book.

Questions for the class:

  • What artistic histories might your sound work connect with?
  • How will narrative function?
  • How will voice function?
  • What type of editing does your content demand?
  • Where are you at with your work?

Analysis of Hildegard Westerkamp’s Kits Beach Soundwalk

  • voice is the command of a change in the piece
  • playing with truth and fiction
  • “recording sound is subjective” Westerkamp.
  • Poetic doc.

Analysis of Glenn Gould’s Contrapuntal Radio/Editing

  • polyphony of voice as an editing technique

What type of audio editor am I?

  • I don’t know 

Analysis of Irv Teibel’s The Altered Nixon Speech

  • Cut up/Truth/Fiction
  • “When you cut into the present, the future leaks out.”
  • hubristic

Maria Chavez: Live/Performance

James Benning: no editing at all

The Noisy-Nonself or I, the Thing in the Margins by Mark Peter Wright

http://markpeterwright.net/the-noisynonself-or-i-the-thing-in-the-margins

Sonic Journalism

  • Based on the idea that all sound, including non-speech, gives information about places and events. This does not exclude speech but redresses the balance towards the relevance of other sounds. Listening provides valuable insights in different forms but is complementary to visual images and language.

https://sounds-from-dangerous-places.org/sonic-journalism/

  • who’s currently recording the war in Ukraine?
  • Syma Tariq: sonic journalism but with archive

Listening Protocol: Editing Interviews and Environments for Broadcast.

CISA #8 Performing as Research – some ideas

Performing Research

  1. Examination of artistic genres
  2. Definitions of performance
  3. The usage of documentary modes in performance
  4. Limits of performance

Practice-based research “involves a research project in which practice is a key method of inquiry. (…) It requires more labour and a broader range of skills to engage in a multi-mode research inquiry than more traditional research processes.”

Robin Nelson, Practice as Research in the Arts (London: Palgrave Macmillan, 2013) 8-9. 

> Specify a research inquiry at the outset.

> Set a timeline for the overall project including the various activities involved in a multi-mode inquiry.

> Build moments of critical reflection into the timeline.

> In documenting a process, capture moments of insight.

> Locate your praxis in a lineage of similar practice.

Draws on “multiple fields and pieces together multiple practices to provide solutions to concrete problems”. 

Estelle Barret & Barbara Bolt, Practice as Research (London: I B Tauris, 2007) 12.

“Practice as research is experimental and materialist because it values responsiveness to context and recognises agency in the material world, which matters because it means research is always acknowledged as a process of making, and value is placed on the research process as well as the product.”

“This plural understanding of artistic research means that the production of artworks is now seen as a legitimate function of the academy, thus blurring the boundary between academy and gallery artworld.”

Stephen Scrivener, Digital Research in the Arts and Humanities (London: Ashgate, 2010), 12.

Sometimes Making Something Leads to Nothing by Francis Alÿs (1997)

  • example of practice-based research.

“Performance cannot be saved, recorded, documented, or otherwise participate in the circulation of representations of representations (…)”.

Peggy Phelan, Unmarked: The Politics of Performance (London: Routledge, 1993).

Ever Is Over All by Pipilotti Rist

“Failure, by definition, takes us beyond assumptions and what we think we know. Artists have long turned their attention to the unrealizability of the quest for perfection, or the open-endedness of experiment, using both dissatisfaction and error as means to rethink how we understand our place in the world.”

Lisa Le Feuvre, Failure (London: Whitechapel, 2010) 12.

El ensayo by Francis Alÿs

This idea reminds me of the work of Lev Vygotsky and David Hume – the former for his zone of proximal development, the latter for his a posteriori perspective on knowledge.

Hit Parade by Christof Migone

“The gesture of ceding some or all authorial control is conventionally regarded as more egalitarian and democratic than the creation of a work by a single artist, while shared production is also seen to entail the aesthetic benefits of greater risk and unpredictability.”

Claire Bishop, Participation (London: Whitechapel, 2006), 12.

Drum Grid by Raven Chacon

A Balloon for Linz by Davide Tidoni

https://shop.whitechapelgallery.org/collections/documents-of-contemporary-art?sort_by=manual – Whitechapel Books.

“The body begins with sound, in sound. The sound of the body is the sound of the other, but it is also the sound of the same… We resound together… every movement is, in fact, a vibration, and every vibration has a sound, however (…)”.

Rie Nakajima live performance in IKON gallery

Lawrence Abu Hamdan: Contra-diction: speech against itself

OTO DATE by Akio Suzuki

“Acoustemology joins acoustic to epistemology to investigate sounding and listening as a knowing-in-action: a knowing with and knowing through the audible.” – Steven Feld

Hear a Pin Drop Here by Holly Rumble