Anders R. Bargum

PhD Student, Aalborg University Copenhagen and Heka VR.


Aalborg University

A.C. Meyers Vænge 15

DK-2450 Copenhagen SV, Denmark

A research- and development-oriented PhD student working in audio processing, speech representation learning, deep learning, and voice synthesis. I am affiliated with the Multisensory Experience Lab at Aalborg University in Copenhagen and actively collaborate with the industrial partner Heka VR. Project use cases range from virtual reality and audio analysis to creative music production.

I am currently working on alternative deep learning methods and models for real-time voice conversion in virtual therapeutic scenarios (AVATAR Therapy). Within the field of audio AI, I have developed and trained models across a wide range of topics, including speaker verification, differentiable DSP, and neural audio codecs. I also have extensive experience exporting these models for real-time use, for example via Hugging Face or TorchScript models hosted in C++/JUCE.

I have worked as an intern at Native Instruments in Berlin and at Neutone AI in Tokyo. I have additionally hosted several workshops and supervised student groups in the Medialogy and Sound and Music Computing programmes at Aalborg University.

I am always open to collaboration, new insights, or a general chat about audio, speech synthesis, and AI. You can reach me at arba@create.aau.dk.

selected publications

  1. Frontiers
    Reimagining Speech: A Scoping Review of Deep Learning-based Methods for Non-parallel Voice Conversion
    Anders R. Bargum, Stefania Serafin, and Cumhur Erkut
    Frontiers in Signal Processing, 2024
  2. APSIPA
    Unified Timbre Transfer: A Compact Model for Real-Time Multi-Instrument Sound Morphing
    Anders R. Bargum, Naotake Masuda, Bogdan Teleaga, and 2 more authors
    In Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2025