
RoshanAudio


This was a small part of an experiment I did to try to build a completely generative music system in Pure Data, and to explore how the generated music could be made interactive in a game. When I started my research on how to incorporate Pure Data into Unity, the options I had were hvcc (Enzien Audio), libpd and Kalimba. I was initially more drawn towards libpd, as it would allow me to run the Pure Data patch and the Unity scene together, forming a tight feedback loop during development. But as it turned out, Unity no longer supported libpd, and Kalimba seemed to have a steeper learning curve than I planned to invest in this project.


So the ideal candidate seemed to be hvcc from Enzien Audio. Although Enzien Audio officially shut down in 2017, they had put their compiler up on GitHub for anyone to use. Even though it supported only a limited set of Pure Data objects, I was optimistic. I was taking a Pure Data implementation course at the School For Video Game Audio at the same time, and Leonard Paul helped me a ton working out the kinks in hvcc to bring to life what I had in my head.


My initial idea was to modify some of Martin Brinkmann's brilliant Pure Data patches to work with hvcc. This process almost broke me: I was getting random compilation errors, and even after clean compilations I was plagued by crashes in Unity, which forced me to examine the patches I was trying to modify inch by inch and pin down the problematic parts.


After the first few weeks of throwing various experiments at the compiler, I started to build my own sequencer and a 16-note table abstraction to store the notes used to play the music piece. That gave me enough confidence to finally start getting results from this whole thing.

The demo I have so far does not incorporate interactivity into the game the way I had initially imagined. What I ended up implementing was basic volume and filter automation driven by game parameters I had defined in Unity.



If you're someone who has tried to use sound samples in Pure Data to program sequences, you will have seen how quickly things get complicated as the number of samples increases. The key to detailed, varied sound is having more layers to work with and more samples per layer, so that nothing sounds repetitive. I ran into this problem while making a footstep patch, and it is what forced me to build this abstraction for Pure Data.


You can download the pure data patch from here.


Coming from the world of Wwise, I knew how useful a random container was, so I wanted my abstraction to be as simple as possible to use in Pure Data.
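To illustrate the idea behind a random container, here is a minimal Python sketch. The class name, method names and file-naming scheme are my own illustrative assumptions, not the actual Pd abstraction; it shows the core behaviour of picking one of N numbered samples at random while avoiding playing the same file twice in a row.

```python
import random


class RandomContainer:
    """Sketch of a Wwise-style random container: pick one of `count`
    numbered samples at random, never repeating the previous pick.
    Names and API here are illustrative, not the actual Pd abstraction."""

    def __init__(self, name, count):
        # Samples are assumed to be named <name>1.wav .. <name><count>.wav
        self.files = [f"{name}{i}.wav" for i in range(1, count + 1)]
        self.last = None

    def next(self):
        # Exclude the previously played file so back-to-back triggers
        # (e.g. footsteps) never sound identical.
        choices = [f for f in self.files if f != self.last]
        pick = random.choice(choices)
        self.last = pick
        return pick


steps = RandomContainer("footstep", 8)
sample = steps.next()  # one of footstep1.wav .. footstep8.wav
```

In the Pd patch the same idea maps onto a `random` object whose output is filtered against the last value before selecting which table to play back.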


The format of the abstraction is as follows:




The arguments for this abstraction are as follows:


The names of all samples for a random container have to be formatted like the picture below. I used a Python script to automate this step: it takes the name of the folder and renames all the files inside it with that name plus an increment.





A Pure Data patch designed to emulate an impulse gun with a loading sound and a firing mechanism. The loading riser was made with a Shepard tone patch from the Pure Data tutorials, modified so that the base pitch can be modulated. The samples for the gun were loaded and played back using the custom random container I made in Pure Data. The Shepard tone abstraction was also layered under the gunshot for a more procedural way of making a futuristic-sounding shot.
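For readers unfamiliar with the technique, here is a small Python sketch of a static Shepard tone snapshot: sinusoids an octave apart whose amplitudes follow a bell curve over log-frequency, so shifting the base pitch changes the colour of the sound without an obvious jump in perceived pitch. The parameter names and envelope width are my own assumptions; the actual Pd patch (and its glide behaviour) differs.

```python
import math


def shepard_tone(base_freq, duration=1.0, sr=44100, octaves=6):
    """Render a static Shepard tone snapshot.

    Partials sit an octave apart starting at `base_freq`; each partial's
    amplitude follows a Gaussian over log2-frequency, centred on the
    middle of the stack, so extreme partials fade in and out smoothly.
    Illustrative sketch only, not the Pd tutorial patch.
    """
    centre = math.log2(base_freq * 2 ** (octaves / 2))  # envelope peak
    out = []
    for i in range(int(duration * sr)):
        t = i / sr
        s = 0.0
        for k in range(octaves):
            f = base_freq * 2 ** k
            # Bell-curve amplitude over log-frequency (width is a guess).
            amp = math.exp(-0.5 * ((math.log2(f) - centre) / 1.5) ** 2)
            s += amp * math.sin(2 * math.pi * f * t)
        out.append(s)
    peak = max(abs(v) for v in out)
    return [v / peak for v in out]  # normalise to [-1, 1]
```

Sweeping `base_freq` upward over time while keeping the envelope centre fixed is what produces the endlessly rising "riser" effect used for the gun's loading sound.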
