FUTUREDAYS

FUTUREDAYS portrait

FUTUREDAYS is an art collective founded by Shin Joonsik and Kim Inhyun that explores the convergence of art and technology through digital media. Building on early experiments with VR painting and collaborations across painting, music, dance, and IT, their practice constructs environments where the virtual and the real coexist. FUTUREDAYS’ works invite audiences to interact with and become part of the artwork, expanding the boundaries of artistic expression and imagination. In doing so, FUTUREDAYS realizes artistic realities of the metaverse and investigates the creative possibilities of the “hyper” era characterized by hyper-reality, hyper-intelligence, and hyper-connectivity. Employing technologies such as VR, AR, and MR, along with AI-based hyperrealism and real-time interaction, FUTUREDAYS transforms space into an experiential medium and continually experiments with new forms of digital convergence and sensory language.

Artworks
  • A Place Called You, 2024, VR headset (HMD), spatial audio, generative AI-based interactive media, real-time 3D graphics, immersive sound art design, generative AI actors, pre-recorded opera vocals and AI-synthesized vocals, Unity-based real-time rendering environment, dry ice machine, screen, lighting, immersive sound installation, 10 min.
    A Place Called You is an immersive intermedia opera that deconstructs and reconfigures the audiovisual language of traditional opera through the use of VR and AI technologies. Moving away from the stage- and performer-centered format, FUTUREDAYS constructs a “sensory narrative” grounded in auditory perception, inviting the audience to engage emotionally and perceptually with the flow of sound. This marks a shift from an opera that is watched to one that is heard and felt, expanding emotional resonance into a spatial experience. By replacing the conventional theater with a digital environment, the work creates an open field where sound, technology, and bodily perception intersect.

    A Place Called You unfolds from “HER,” one of the episodes in FUTUREDAYS’ metaverse opera series The Day I Chose to Be Me – ME, YOU, HIM, HER. The central figure, HER, reinterprets Micaëla from the classic opera Carmen, transforming her from a peripheral and passive character into a new subject of emotion and memory. Through AI-synthesized vocals and generative sound, HER speaks in her own voice, emerging as an autonomous being within a technological system that perceives gaze, movement, and speech. As viewers traverse HER’s inner landscape, they become coexistent subjects within the operatic space, experiencing both the emotional pulse and spatial resonance of her world.

    The work marks a pivotal moment in digital art by positioning technology not merely as a medium but as a narrative agent in its own right. FUTUREDAYS combines AI-synthesized vocals, generative sound systems, real-time 3D graphics, and interactive structures that respond to the audience’s movement and gaze to construct a fluid narrative in which technology continuously reorganizes the structure of music and the rhythm of emotion in real time. Functioning simultaneously as a conduit for emotion and as a stage itself, this system generates a unique sensory experience each time, unfolding an operatic scene where human perception and nonhuman algorithms resonate together.
    • diagram28.png
Artist Responses
What media and technical components make up this work?
This work is an intermedia opera based on a VR HMD, integrating hardware and software in a tightly interwoven form. The hardware relies on Meta Quest 3 or later compatible devices, while the software runs on Unity 2022.3, implementing real-time 3D graphics and binaural spatial audio. Generative AI performers, employing LLM-based voice and motion synthesis, are combined with pre-recorded operatic vocals and AI-synthesized singing to heighten immersion. Interaction is enabled through voice recognition, gesture tracking, and eye tracking, and the work runs as a Meta Quest–exclusive .apk application.

Is conversion into other formats possible?
Yes.

Are there plans to migrate or emulate this work in response to future technological changes?
Yes.

What should be prioritized to ensure the long-term care of this work?
Long-term preservation requires regular updates for Unity LTS versions and Meta SDK compatibility. The original executable file, logs, and dialogue/response databases (DB) should be backed up redundantly on both Google Drive and NAS. The HMD does not need to be replaced as long as it operates normally, but consumable components such as the battery, sensors, and lenses may need to be replaced or supplemented as they age.

What do you consider the essential element that must be sustained, even if the medium changes?
The most essential value of this work lies in the audience’s experience of an emotional site within an auditory space, and in the immersive narrative shaped by generative AI performers and spatial audio. Therefore, even if the hardware or software changes over time, the identity of the work resides in its auditory spatiality and narrative structure, and these must be preserved without compromise.

Does the work depend on specific technical forms or hardware?
This work is optimized for the Meta Quest series of HMDs and was developed on a Unity-based platform. It does not depend on obsolete hardware such as CRT monitors, but its realization requires a VR HMD and a spatial audio environment.

What are the most important environmental conditions for installation (such as space, lighting, or sound)?
Each viewer requires a minimum of 3 m² of space. A three-dimensional immersive sound environment is essential, and the space should ideally be isolated from external noise. Dark lighting is most effective, and dry ice or fog machines can further enhance visual immersion.

To what extent can the work be reinstalled in the future without the direct involvement of the artist?
The work can be installed and operated using the provided manual kit. Tasks such as HMD setup, file installation, and spatial and audio configuration can be reproduced at any time by exhibition technicians following the manual. However, the interpretive and directorial aspects of the work may still require consultation with the artist.

Can the current form of the work be maintained in the future?
Even if the hardware used in this work (Meta Quest 3) becomes obsolete, it can be fully migrated to next-generation HMD SDK environments. Since the core of the audience’s experience lies in the generative AI–based three-dimensional spatial structure, changes in hardware will not affect the identity of the work. For example, while the current showcase is based on VR devices, the work could be expanded in the future to MR headsets such as Apple Vision Pro, where the integration of fog, lighting effects, and AI NPCs would continue to preserve its identity.

If specific equipment or components fail, can they be replaced?
Devices such as HMDs, dry ice machines, and fog machines can be substituted with alternatives available on the market. The Unity-based software can be reinstalled from backup files (.apk/.unitypackage), so replacement and restoration remain possible even if specific equipment fails.

Are there any particular points you would like to emphasize or considerations to note regarding the preservation of the work?
As the work operates on server- or local API–based systems, it is essential to keep detailed records of model versions and update histories. It is equally important that the preservation management system itself is sustained over time.
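The preservation answers above call for redundant backups of the executable, logs, and dialogue/response databases on both Google Drive and NAS. A minimal Python sketch of how a conservator might verify that the mirrored copies remain intact, by comparing SHA-256 checksums between a primary archive and its mirrors; the artifact paths listed in `ARTIFACTS` are hypothetical placeholders, not the work’s actual file layout:

```python
import hashlib
from pathlib import Path

# Hypothetical artifact layout; substitute the work's real files.
ARTIFACTS = [
    "build/APlaceCalledYou.apk",
    "logs/session.log",
    "db/dialogue_responses.sqlite",
]

def sha256sum(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_mirrors(primary: Path, mirrors: list[Path]) -> list[str]:
    """Compare each artifact across the primary archive and its mirrors.

    Returns a list of human-readable problem reports; an empty list
    means every mirror holds a bit-identical copy of every artifact.
    """
    problems = []
    for rel in ARTIFACTS:
        src = primary / rel
        if not src.exists():
            problems.append(f"missing in primary: {rel}")
            continue
        want = sha256sum(src)
        for mirror in mirrors:
            dst = mirror / rel
            if not dst.exists():
                problems.append(f"missing in {mirror}: {rel}")
            elif sha256sum(dst) != want:
                problems.append(f"checksum mismatch in {mirror}: {rel}")
    return problems
```

Run after each sync against the primary archive and each mirror (e.g. the NAS mount and a local Google Drive sync folder); any returned report flags a copy that should be re-synced before the divergence propagates.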