“I am currently working on a new project, Lullabyte, which aims to explore the effects of music on sleep,” says Dr. Sergi Jordà

Dr. Sergi Jordà is a Catalan innovator, installation artist, digital musician, and Associate Professor at the Music Technology Group, Universitat Pompeu Fabra (Barcelona, Spain). He is best known for directing the team that invented the Reactable.

Dr. Sergi Jordà holds a B.S. in Fundamental Physics and a PhD in Computer Science and Digital Communication. During his undergraduate years in the 1980s, he discovered computer programming and decided to fully devote himself to live computer music. He is a senior researcher at the Music Technology Group of Universitat Pompeu Fabra in Barcelona, where he directs the Music & Advanced Interaction (MAIn) team.

He teaches courses in Computer Music, Audio Signal Processing, Human-Computer Interaction (HCI), and Interactive Media.

He told us about his current research interests, centered on the interaction of AI and human creativity, and explained some of the possibilities of AI in music creation as well as its potential dangers. He also shared his new project, Lullabyte, which may answer the question of how sounds synchronized with brain waves can improve certain properties of sleep.

Interview: Irina Rybalchenko for El Periodic news

What exactly do you do in the Music Technology Group at Universitat Pompeu Fabra?

The Music Technology Group (MTG) is a research group, one of the most important in Europe, specializing in audio signal processing, music information retrieval, musical interfaces, and computational musicology. It is a part of the Department of Information and Communication Technologies of the Universitat Pompeu Fabra in Barcelona.

The group has existed since 1995. We are three professors and many researchers, postdocs, and PhD students (about 50 people). We work in many areas related to music technology, and we cooperate with companies all over the world. Personally, I have been working in the field of digital musical instrument design for many years, but now I work mainly with AI.

What are you teaching in the framework of computer music courses?

We offer a master’s degree in music research and a master’s in Sound and Music Computing, with students from all over the world, including Russia, India, China, America, and Europe.

We teach music search, music analysis, and topics related to the latest technologies in the field of music creation. Our students tend to be technicians and musicians at the same time. Our graduates can go on to a PhD or start working at companies such as Spotify and SoundCloud; for example, they can become makers of synthesizers.

How do you use AI for creating music?

We don’t use AI only to create music specifically, but the truth is that, in all the modern technologies we use, AI is becoming the main tool. Since 2015, AI has become the way to do many things. This is a very important global trend. In the specific field of music generation, many AI techniques are, in one way or another, an attempt to substitute for humans. That is very dangerous. We do AI research for music creation, but it is always about enhancing human creativity, not replacing it.

Creating music using AI without human intervention can become an important trend.

AI is able to create mainstream music. There are improvements every week. So, AI can make music better than an average composer. It’s very cheap. And no one cares whether music was created by human beings or by machines. So it can be bad for musicians because they’re going to have trouble surviving. And it’s also bad for music because it’s getting more and more commercial.

That’s why we need to distinguish between the use of AI to assist human creativity, which can bring new qualities, and the use of AI to replace human creativity. We are interested in using AI specifically to help human creators.

What are the main objectives and goals of The Freesound Project? What is the history of the project, and what is it about?

Freesound currently hosts more than 500,000 audio samples, organized into a hierarchical collection of sound classes. The Freesound project started in 2005 and has grown into a very large project with hundreds of thousands of registered users. It is a sound database that lets you upload recordings, download them, and share them with other people. We also use it to develop sound recognition and classification. Many filmmakers use Freesound for their films. It can be used free of charge, and the database is supported by user donations.
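For readers who want to try Freesound themselves, the database also exposes a public REST API (APIv2). The snippet below is a minimal Python sketch of a text search; the API key placeholder, the example query “rain”, and the chosen fields are illustrative values you would adapt after registering for a free key on the Freesound website.

```python
# Minimal sketch of a Freesound APIv2 text search.
# Assumes you have registered for a free API key on freesound.org.
import requests

API_KEY = "YOUR_FREESOUND_API_KEY"  # placeholder: use your own key

response = requests.get(
    "https://freesound.org/apiv2/search/text/",
    params={
        "query": "rain",               # free-text search (example query)
        "fields": "id,name,duration",  # request only the fields we need
        "token": API_KEY,              # token-based authentication
    },
    timeout=10,
)
response.raise_for_status()

# Print the first page of matching samples (name and duration in seconds).
for sound in response.json()["results"]:
    print(f'{sound["id"]}: {sound["name"]} ({sound["duration"]:.1f} s)')
```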

Your doctoral dissertation, Digital Lutherie, established the foundations of new forms of digital music based on visual response and spatial multiplexing. Could you explain the details to us, please?

I don’t know how I can summarize this. I started playing music in the mid-eighties; I studied physics in Barcelona and played the saxophone. I played free jazz. I wasn’t a good saxophonist because I didn’t like practicing.

One day at university, I discovered computer programming. Gradually, I realized that programming could be used to create music and that I could use computers for the parts of music creation that I found “boring” or repetitive. Since then, my main goal has been to use computers to make music in real time.

My 2005 PhD thesis summarizes, in that sense, all my knowledge on these topics at the time. It helped me clarify many ideas, streamline my thoughts, and formalize what I knew. Eventually, this led to the development of an instrument that became very popular: the Reactable, a tabletop musical instrument.

What are its technical features? How did Björk decide to use the Reactable during her 2007–08 Volta world tour? Did you get any feedback from her?

In 2006, my colleagues and I uploaded several short videos to YouTube (this was in YouTube’s early days). These videos became very popular; within a few days, they gained millions of views. One of the people who took an interest was Björk. She contacted us saying she was starting a world tour and wanted to use the Reactable. We brought it to Paris and showed it to her. She liked it, and we made our first prototype for her.

During her concerts, the Reactable was shown in close-up on big screens, so it became very popular. We started making Reactables for other musicians, as well as for museums, studios…

We founded the company in 2008, and it lasted until 2018. One of the goals of the Reactable was to cover all aspects of an instrument. It had high expressiveness and could be as complex as a professional musician wanted; on the other hand, it could be simple enough to suit anyone, even kids.

Throughout history, there have been many attempts to find new musical forms, for example, forms based not on the classical seven-note system that dominates modern music but oriented towards microtonality. Many enthusiasts still actively pursue these experiments. Do you participate in such experiments?

I have always loved improvisation in music. Through music, I develop myself. I don’t think I have a particular style, but I am very linked to experimental music. Microtonality, noises, and sounds of all kinds have a place in my music. Microtonality is not new at all, and formalizing it does not seem important to me. What matters more is the idea that everything can be music; microtonality is just one such case.

To be honest, I don’t have much free time to make music lately. The last time I played live was over 10 years ago.

Have you heard about the Russian scientist Sergey Petoukhov’s work “Matrix Genetics,” in which he analyzed the DNA code with matrix and other mathematical methods and found a way to reconcile the classical Pythagorean musical system with a new “pentagram” system based on Fibonacci numbers? What do you think about that?

For me, this is not so relevant. The seven-note system is used in most commercial music in the Western world, but millions of musical compositions exist outside this system. Commercial music is dominated by seven notes, yet many cultures do not use this system at all.

Why do you think the symbiosis between science and music is happening at the moment?

I think this has always happened. Many theories of music have been associated with world harmony and cosmology. The industrial revolution brought us the sound of the piano. Music has always been associated with science and technology; it has always used the latest technology, some of which was not originally created for making music.

With the era of electricity, music evolved very quickly. I don’t think this trend is new; it is about 3,000 years old.

Could you tell us about your collaboration with Catalan artists?

In the 90s, I collaborated with Catalan artists doing media art. One of them was Marcel·lí Antúnez. I have also collaborated with La Fura dels Baus, a large theatre company from Barcelona that has toured all over the world.

Now we are collaborating with Raül Refree, who produced Rosalía’s first album. We are working on a concert where Raül will be accompanied by an AI system.

How do you plan to develop further? What are your plans for the future?

I am currently working on a new project, Lullabyte, which aims to explore the effects of music on sleep. This is a big European project; we are one of the partners, alongside sleep scientists, neurologists, clinics… There are ten centers across Europe, in Sweden, Denmark, Germany, Holland, and France. It will take at least three years. Our plan is to analyze brain waves and produce sounds that are synchronized with them. We want to study how these sounds can improve certain properties of sleep: they can help us sleep more deeply, fall asleep faster, and avoid interruptions to sleep.
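The closed-loop idea Jordà describes, playing sounds in synchrony with ongoing brain waves, can be sketched in a few lines of code. The example below is purely illustrative and not Lullabyte code: it assumes a buffer of EEG samples, band-pass filters it around the slow-oscillation range, and flags moments where a sound cue could be scheduled. The sampling rate, frequency band, and threshold are hypothetical choices.

```python
# Illustrative sketch only (not Lullabyte code): find candidate moments in an
# EEG trace for playing a sound cue synchronized with slow oscillations.
# Sampling rate, frequency band, and threshold are hypothetical values.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0           # assumed EEG sampling rate in Hz
BAND = (0.5, 2.0)    # slow-oscillation band in Hz (hypothetical)
THRESHOLD_UV = -40.0 # negative half-wave threshold in microvolts (hypothetical)

def slow_oscillation_cues(eeg_uv: np.ndarray) -> np.ndarray:
    """Return sample indices where a sound cue could be scheduled.

    The signal is band-pass filtered around the slow-oscillation range, and a
    cue is flagged each time the filtered trace first dips below the negative
    threshold (a crude proxy for the start of a down-state).
    """
    b, a = butter(2, [f / (FS / 2) for f in BAND], btype="bandpass")
    filtered = filtfilt(b, a, eeg_uv)
    below = filtered < THRESHOLD_UV
    # Indices where the trace crosses the threshold on its way down.
    crossings = np.where(below[1:] & ~below[:-1])[0] + 1
    return crossings

if __name__ == "__main__":
    # Ten seconds of synthetic "EEG": a 1 Hz oscillation plus noise.
    t = np.arange(0, 10, 1 / FS)
    fake_eeg = 60 * np.sin(2 * np.pi * 1.0 * t) + 10 * np.random.randn(t.size)
    cues = slow_oscillation_cues(fake_eeg)
    print(f"Would schedule {cues.size} sound cues at t = {cues / FS} s")
```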
