This paper introduces an interdisciplinary approach that merges auditory and visual experiences through computational design. The project transforms sound into visual forms, bridging a significant gap in sensory integration within creative contexts. Using a suite of software tools, including Rhinoceros 3D and Grasshopper along with its extensions, the algorithm captures live audio, processes it, and converts it into digital geometries. This process not only visualizes sound but also demonstrates the potential of computational design in crafting tangible art forms. Preliminary tests have shown promising outcomes for both real-time and pre-rendered audio geometries. This research highlights the importance of integrated sensory experiences and contributes to the field of sound-based tangible interaction.
Computational Design, Sound-to-Form, Geometry, Algorithms
In an era where the intersection of auditory and visual realms is increasingly prominent, this research presents an interdisciplinary approach that blends sound and visual experiences through computational design. This paper describes a novel venture: transforming auditory inputs into tangible visual forms, thereby opening a new dimension in sensory integration. At the core of this exploration is a pipeline built on a suite of software tools such as Rhinoceros 3D, Grasshopper, and their various extensions (Tsai, Chen, Tsai, & Hung, 2010). The pipeline captures live audio, processes it, and translates it into intricate digital geometries. This methodology not only provides a visual representation of sound but also underscores the potential of computational design in producing tangible artistic expressions. The project, grounded in the principles of New Interfaces for Musical Expression (NIME) at OCAD University, stands as a testament to the symbiosis of music, computational design, digital visualization, and physical artifact creation. It marks a significant stride in addressing the traditional segregation of sensory experiences, offering a fresh perspective on creative exploration. This paper presents the conceptualization, development, and preliminary testing of the approach, highlighting its implications for reshaping how we interact with and understand the intertwined nature of sound and form.
The fusion of sound and visual art has long been a subject of fascination, yet the full potential of this interdisciplinary convergence has only begun to be explored in recent years (Grobman, Yezioro, & Capeluto, 2009). Historically, the segregation of sensory experiences — auditory, visual, and tactile — has been a prevalent norm in creative contexts. This traditional division has often limited the intuitive understanding and appreciation of the interconnectedness of these sensory modalities. However, with the advent of advanced computational tools and techniques, a new horizon has emerged, offering unprecedented opportunities for integration.
The field of computational design has seen a rapid evolution, transitioning from basic digital aids to sophisticated tools capable of intricate data processing and visualization (Ahn et al., 2014). Software like Rhinoceros 3D and Grasshopper has revolutionized the way designers and artists conceptualize and create. These tools have opened doors to algorithmic and parametric design (Castelo-Branco, Caetano, Pereira, & Leitão, 2022), allowing for the creation of forms and structures that were previously unimaginable. The ability to integrate external data, such as sound, into these design processes has further expanded the scope of what can be achieved.
In recent years, interdisciplinary research at the intersection of auditory and visual domains has gained significant momentum. Scholarly articles and case studies have begun to explore the potential of digital tools in fostering a more integrated sensory experience (Bertol, 1994). These studies have highlighted the benefits of such integration, particularly in enhancing educational methodologies and creative expressions. However, they have also pointed out a notable gap — the lack of solutions that facilitate real-time transmutation of sound into visual forms in an interactive and user-friendly manner.
NIME, as a field, has been instrumental in exploring the boundaries of musical expression through new technological interfaces (Jensenius, n.d.). It has encouraged the development of tools and platforms where sound is not just a means of auditory experience but a source for visual and physical creation (Levin & Lieberman, 2005). The work presented in this paper is deeply rooted in the principles and explorations of NIME, aiming to push the boundaries of how we perceive and interact with sound.
The increasing interest in multimodal sensory experiences reflects a broader cultural and educational shift. There is a growing recognition of the value of integrating sensory modalities to enhance understanding and engagement (Nordin, Motte, Hopf, Bjärnemo, & Eckhardt, 2011). In creative domains, this integration offers a more holistic experience, enabling artists and audiences to perceive and interpret art in multidimensional ways. Thus, the background of this research is anchored in the evolving landscape of computational design, the interdisciplinary integration of sound and visual art, and the pioneering spirit of NIME. The project emerges as a response to the historical separation of sensory experiences, leveraging contemporary technological advancements to bridge this divide. This exploration is not just about creating a new tool or method, but about rethinking the way we interact with and understand the complex relationship between sound and form.
The proposed workflow aims to bridge this sensory gap through a meticulously designed pipeline that converts sound data into digital forms, which can then manifest in the physical realm, creating a tangible bridge between the audible and the visual. Utilizing software tools such as Rhinoceros 3D, Grasshopper, Firefly, WeaverBird, Elefront, Heliotrope, Mosquito, Human, Bifocals, and Moonlight, the pipeline captures live audio, processes the sound data, and transmutes it into visual geometry in real time as well as in pre-rendered scenarios. The stages are outlined below; a minimal scripted sketch of them follows the list.
• Sound Capture: The initial stage involves capturing live audio using the Sound Capture component provided by Firefly.
• Data Recording and Processing: Subsequently, the sound data is recorded and streamlined for further manipulation.
• Geometry Creation: The processed data is employed to create visual geometry, with control over the formation parameters.
• Geometry Manipulation: Additional refinement and articulation of the geometry are achieved through various computational operations.
• Catmull-Clark Smoothing: The final geometry is smoothed and visualized, offering options for real-time interaction and exploration.
• Baking Geometry: The final geometry is baked with all its attributes and properties captured; this geometry can then be further utilized in manufacturing processes.
• Sorting and Preparation for 3D Printing: Cleanup and sorting of the geometry, followed by orienting the baked geometry for 3D printing.
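To make these stages concrete, the following is a minimal offline sketch in Python rather than the project's actual Grasshopper definition: a 16-bit mono WAV file stands in for Firefly's live Sound Capture, a moving average stands in for the data-processing stage, stacked amplitude-driven rings stand in for geometry creation, and writing an OBJ file stands in for baking. The file names and parameters are illustrative assumptions.

```python
# Offline sketch of the pipeline stages; all names and values are illustrative.
import wave

import numpy as np

FRAME = 1024     # samples per analysis frame
BASE_R = 10.0    # base ring radius
GAIN = 25.0      # amplitude-to-radius scale
SIDES = 32       # points per ring

# 1. Sound-capture stand-in: read a 16-bit mono PCM WAV from disk
#    instead of a live microphone.
with wave.open("input.wav", "rb") as w:
    raw = w.readframes(w.getnframes())
samples = np.frombuffer(raw, dtype=np.int16).astype(float) / 32768.0

# 2. Recording and processing: one RMS value per frame, lightly smoothed.
n = len(samples) // FRAME
rms = np.array([np.sqrt(np.mean(samples[i*FRAME:(i+1)*FRAME] ** 2))
                for i in range(n)])
rms = np.convolve(rms, np.ones(5) / 5.0, mode="same")  # moving average

# 3. Geometry creation: stack one ring of points per frame;
#    the ring radius follows the frame's amplitude.
theta = np.linspace(0.0, 2.0 * np.pi, SIDES, endpoint=False)
rings = [np.column_stack(((BASE_R + GAIN * a) * np.cos(theta),
                          (BASE_R + GAIN * a) * np.sin(theta),
                          np.full(SIDES, float(z))))
         for z, a in enumerate(rms)]

# 4. Baking stand-in: write the quad mesh to an OBJ file so it can be
#    cleaned up, sorted, and oriented for 3D printing downstream.
with open("sound_form.obj", "w") as f:
    for ring in rings:
        for x, y, z in ring:
            f.write("v %.4f %.4f %.4f\n" % (x, y, z))
    for r in range(len(rings) - 1):
        for s in range(SIDES):
            a = r * SIDES + s + 1                    # OBJ indices are 1-based
            b = r * SIDES + (s + 1) % SIDES + 1
            f.write("f %d %d %d %d\n" % (a, b, b + SIDES, a + SIDES))
```

The Catmull-Clark smoothing stage is omitted here; in the real workflow WeaverBird performs it on the quad mesh before baking.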
To bring this project to fruition, several technical resources and software tools are indispensable. The outlined pipeline primarily leverages the capabilities of Rhinoceros 3D and Grasshopper 3D to process and translate the auditory data into visual geometric formations.
• Rhinoceros 3D: A primary software for creating 3D forms.
• Grasshopper 3D: An integral parametric design plugin within Rhinoceros, utilized for algorithmic design.
• Firefly: An extension within Grasshopper 3D, crucial for connecting external data inputs like live audio.
• WeaverBird: A plugin for smoothing meshes and creating subdivision schemes in Grasshopper 3D. Used for Catmull-Clark subdivision smoothing.
• Elefront: A plugin for managing and organizing data, and creating baked geometry in Rhino and Grasshopper. Used for controlled baking of geometry (a minimal scripted sketch of this step follows the requirements list below).
• Heliotrope: A lightweight solar geometry tool for Rhino/Grasshopper that generates solar vectors, manipulates astronomical dates, and casts shadow silhouettes. Used for day and time extraction for layers.
• Mosquito: A plugin suite for tapping into social, financial, and popular media, extracting location, buildings, roads, OpenStreetMap data, profiles, images, and messages from platforms like Facebook and Twitter. Used for pre-rendered audio input.
• Human: An extension that enhances Grasshopper’s ability to create and reference geometry including lights, blocks, and text objects, and also enables access to information about the active Rhino document pertaining to materials, layers, linetypes, etc. Used for geometry baking and other critical functions.
• Bifocals: A plugin that labels all Grasshopper components with their full names as they are placed on the canvas, aiding in documentation.
• Moonlight: A single component that toggles Grasshopper’s GUI between Light Mode and Dark Mode to reduce eye strain.
• Microphone: For real-time audio capture.
• Computational Resources: Adequate processing power and memory to handle real-time data processing and visualization.
• Familiarity with parametric and computational design principles.
• Familiarity with audio production tools and their design principles.
• Proficiency in utilizing the mentioned software tools and handling audio-visual data.
• Ability to connect the extensions in a way that fits the workflow.
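As a rough illustration of the controlled-baking step handled by Elefront and Human, the sketch below uses RhinoCommon from a GhPython component to bake a mesh onto a named layer. It is a minimal sketch, not the project's actual definition; the inputs M (a mesh) and layer_name (a string) are hypothetical component inputs.

```python
# Minimal GhPython sketch of controlled baking, standing in for what
# Elefront/Human provide as native components. Assumes it runs inside a
# GhPython component with inputs M (mesh) and layer_name (string);
# both input names are hypothetical.
import Rhino
from System.Drawing import Color

doc = Rhino.RhinoDoc.ActiveDoc

# Find or create the target layer for the baked geometry.
layer = doc.Layers.FindName(layer_name)
layer_index = layer.Index if layer else doc.Layers.Add(layer_name, Color.Black)

# Attach attributes so the baked object stays organized for the later
# sorting and 3D-print-preparation steps.
attr = Rhino.DocObjects.ObjectAttributes()
attr.LayerIndex = layer_index
attr.Name = "acoustic_morphology"

baked_id = doc.Objects.AddMesh(M, attr)  # GUID of the newly baked object
```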
The integration of these software and hardware components forms a seamless pipeline. Live audio captured through the microphone is fed into Firefly, where it is processed and converted into data streams. This data is then algorithmically manipulated in Grasshopper 3D, using various plugins like WeaverBird and Elefront, to create and refine digital geometries. Rhinoceros 3D provides the environment for visualizing these geometries. The entire workflow is designed to be intuitive, allowing for creative exploration and tangible outputs from auditory inputs.
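For readers outside the Grasshopper environment, the standalone sketch below approximates the first stage of this data flow: capturing microphone audio and reducing it to a smoothed per-frame amplitude stream of the kind that drives the geometry parameters. It is a minimal sketch assuming the third-party sounddevice and numpy libraries; the actual pipeline performs this step with Firefly's Sound Capture component inside Grasshopper.

```python
# Stand-in for the live audio-capture stage; parameters are illustrative.
import numpy as np
import sounddevice as sd  # third-party; the real pipeline uses Firefly

RATE = 44100   # sample rate in Hz
FRAME = 1024   # one amplitude value per 1024-sample frame

def amplitude_stream(seconds=5.0):
    """Yield one smoothed RMS amplitude per frame of microphone audio."""
    buf = sd.rec(int(seconds * RATE), samplerate=RATE, channels=1,
                 dtype="float32")
    sd.wait()                            # block until recording finishes
    mono = buf[:, 0]
    level = 0.0
    for i in range(len(mono) // FRAME):
        rms = float(np.sqrt(np.mean(mono[i*FRAME:(i+1)*FRAME] ** 2)))
        level = 0.8 * level + 0.2 * rms  # light exponential smoothing
        yield level

# Each yielded value could drive a geometry parameter such as a ring radius.
for a in amplitude_stream(2.0):
    print("%.4f" % a)
```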
Below is the system diagram showing the relationship between various components in the workflow.
Fig. 1 - System diagram of the workflow
"Acoustic Morphologies: Sound-based Form Generation through Computation" marks a significant step forward in the interdisciplinary field of sound visualization and computational design. This research has demonstrated the feasibility of transforming live audio into visual geometries using a suite of advanced software tools and hardware. The process not only visualizes sound but also paves the way for new methods of creating tangible art forms, bridging a significant gap in sensory integration within creative contexts.
The project, grounded in the principles of New Interfaces for Musical Expression (NIME) and conducted at OCAD University, has shown promising outcomes in both real-time and pre-rendered audio visualizations. It has not only offered a novel perspective in the domains of education and creative exploration but also contributed significantly to the field of sound-based tangible interaction.
• Interdisciplinary Integration: The project underscores the potential of integrating auditory and visual domains through computational design, encouraging a more holistic approach to sensory experiences.
• Technological Synergy: The effective use of Rhinoceros 3D, Grasshopper, and other plugins demonstrates the power of combining multiple software tools to create complex, interactive designs.
• Educational and Creative Implications: This research opens new avenues in both educational and creative fields, enabling a deeper understanding and appreciation of the interconnectedness of sound and form.
• Future Directions: The promising results of this proof of concept pave the way for further exploration and development. The project has the potential to expand into more complex applications, including interactive installations, educational tools, and innovative art exhibits.
• Challenges and Opportunities: While the project has been successful, it also faces challenges such as ensuring accessibility, refining user interfaces, and managing computational resources. These challenges present opportunities for future research and development.
In conclusion, this project is not just about creating a new tool or method, but about rethinking the way we interact with and understand the complex relationship between sound and form. It contributes to a growing body of work that seeks to unify different sensory experiences, thereby enriching the perception and interaction with the world around us. As technology continues to evolve, so will the possibilities for these kinds of interdisciplinary explorations, leading to more integrated and immersive experiences in both art and education.
[1] S. Ahn et al. A Case Study on Application of Parametric Designing to Industrial Design. Journal of Digital Design, 14 (2014), 337-346. Available at: https://doi.org/10.17280/jdd.2014.14.3.034.
[2] D. Bertol. Form Generation and Evolution (1994). Available at: https://doi.org/10.1007/978-1-4757-6946-3_3.
[3] Bifocals. Food4Rhino (February 22, 2016). Available at: https://www.food4rhino.com/en/app/bifocals.
[4] R. Castelo-Branco, I. Caetano, I. Pereira, and A. Leitão. Sketching Algorithmic Design. Journal of Architectural Engineering, 28 (2022). Available at: https://doi.org/10.1061/(ASCE)AE.1943-5568.0000539.
[5] J. Grobman, A. Yezioro, and G. Capeluto. Computer-Based Form Generation in Architectural Design — A Critical Review. International Journal of Architectural Computing, 7 (2009), 535-554. Available at: https://doi.org/10.1260/1478-0771.7.4.535.
[6] Heliotrope – Solar. Food4Rhino (October 7, 2011). Available at: https://www.food4rhino.com/en/app/heliotrope-solar.
[7] A. R. Jensenius. Kinectofon: Performing with Shapes in Planes. Available at: https://www.duo.uio.no/handle/10852/35776.
[8] G. Levin and Z. Lieberman. Sounds from Shapes: Audiovisual Performance with Hand Silhouette Contours in The Manual Input Sessions (2005). Available at: https://zenodo.org/records/1176772.
[9] Moonlight. Food4Rhino (August 26, 2018). Available at: https://www.food4rhino.com/en/app/moonlight.
[10] A. Nordin, D. Motte, A. Hopf, R. Bjärnemo, and C.-C. Eckhardt. Complex Product Form Generation in Industrial Design: A Bookshelf Based on Voronoi Diagrams, in: Design Computing and Cognition ’10 (2011), 701-720.
[11] H.C. Tsai, T. Chen, H. Tsai, and F. Hung. Computer-Aided Form Generation for Product Design. Advanced Materials Research, 97-101 (2010), 3785-3788. Available at: https://doi.org/10.4028/www.scientific.net/AMR.97-101.3785.
GitHub link for the code: https://github.com/calluxpore/Acoustic-Morphologies-Sound-based-Form-Generation-Through-Computation.git
Performance Video Link: Acoustic Morphologies: Sound-based Form Generation Through Computation (Video Demo) on Vimeo