Shanghai Conservatory of Music
Researchers from Northwestern Polytechnical University and Shanghai Conservatory of Music introduce SongEval, the first open-source, large-scale benchmark dataset for multi-dimensional aesthetic evaluation of full-length AI-generated songs. Models trained on SongEval accurately predict human-perceived musical quality, outperforming existing objective metrics.
GVMGen is introduced as a general model for generating multi-style, multi-track waveform music from video, employing hierarchical attention mechanisms for implicit visual-musical feature alignment. The model achieves superior music-video correspondence and generative diversity compared to existing methods, supported by a novel objective evaluation framework and a new large-scale dataset including Chinese traditional music.
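The implicit alignment described above can be pictured as music tokens attending over video-frame features. Below is a minimal, illustrative sketch of one such cross-attention layer in PyTorch; the class name, dimensions, and residual design are assumptions for exposition, not GVMGen's actual hierarchical architecture.

```python
import torch
import torch.nn as nn

class VideoMusicCrossAttention(nn.Module):
    """Hypothetical single cross-attention block: music tokens query video features."""

    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, music_tokens, video_feats):
        # Each music step attends to the visually relevant frames,
        # aligning musical and visual features without explicit labels.
        aligned, _ = self.attn(query=music_tokens,
                               key=video_feats,
                               value=video_feats)
        return music_tokens + aligned  # residual connection
```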
This paper proposes an expressive singing voice synthesis system that introduces explicit vibrato modeling and a latent energy representation. Vibrato is essential to the naturalness of synthesized singing because of the inherent characteristics of the human voice. We therefore introduce a deep learning-based vibrato model that controls the vibrato's likeliness, rate, depth, and phase, where vibrato likeliness denotes the probability that vibrato is present and helps improve the naturalness of the synthesized voice. Since existing singing corpora carry no annotated vibrato-likeliness labels, we adopt a novel labeling method that annotates vibrato likeliness automatically. Meanwhile, the power spectrogram of audio contains rich information that can improve the expressiveness of singing, so we propose an autoencoder-based latent energy bottleneck feature for expressive singing voice synthesis. Experimental results on the open NUS48E dataset show that both the vibrato modeling and the latent energy representation significantly improve the expressiveness of the synthesized singing voice. Audio samples are available on the demo website.
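To make the four-parameter control concrete, here is a minimal sketch of applying sinusoidal vibrato to a frame-level F0 contour, with the likeliness gating the modulation. The function name, units, and blending scheme are illustrative assumptions; in the paper these parameters are predicted by a neural model.

```python
import numpy as np

def apply_vibrato(f0, likeliness, rate_hz, depth_cents, phase, hop_s=0.005):
    """Modulate a frame-level F0 contour (Hz) with sinusoidal vibrato.

    likeliness  : existence probability of vibrato (scalar or per-frame array)
    rate_hz     : vibrato oscillation rate in Hz
    depth_cents : peak pitch deviation in cents
    phase       : initial phase of the oscillation in radians
    """
    t = np.arange(len(f0)) * hop_s                              # frame times (s)
    cents = depth_cents * np.sin(2 * np.pi * rate_hz * t + phase)
    modulated = f0 * 2.0 ** (cents / 1200.0)                    # cents -> Hz ratio
    # Blend flat and vibrato contours according to the existence probability.
    return (1.0 - likeliness) * f0 + likeliness * modulated

# Example: 2 s of a flat A3 at a 5 ms hop, with a 5.5 Hz, 80-cent vibrato.
f0 = np.full(400, 220.0)
out = apply_vibrato(f0, likeliness=0.9, rate_hz=5.5, depth_cents=80, phase=0.0)
```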
Researchers from Tongji University and Shanghai Conservatory of Music present a comprehensive review of intelligent music generation systems, analyzing various algorithms, music representations, and evaluation methods. The work uniquely conducts a comparative analysis of Eastern and Western research, pinpointing cultural differences and resource limitations in datasets and evaluation practices.
Vocal education in music is difficult to quantify because of individual differences among singers' voices and the varying quantitative criteria for singing techniques. Deep learning has great potential in music education thanks to its efficiency in handling complex data and performing quantitative analysis. However, accurate evaluation of rare vocal types with limited samples, such as the Mezzo-soprano, requires extensive well-annotated data for deep learning models. To this end, we apply transfer learning, employing models pre-trained on the ImageNet and UrbanSound8K datasets to improve the precision of vocal technique evaluation. Furthermore, we address the scarcity of samples by constructing a dedicated dataset, the Mezzo-soprano Vocal Set (MVS), for vocal technique assessment. Our experimental results show that transfer learning increases the overall accuracy (OAcc) of all models by an average of 8.3%, with a highest accuracy of 94.2%. We not only provide a novel approach to evaluating Mezzo-soprano vocal techniques but also introduce a new quantitative assessment method for music education.
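A minimal sketch of the transfer-learning recipe described above, assuming mel-spectrograms rendered as 3-channel images and an ImageNet-pretrained ResNet-18 backbone; the class count, frozen-backbone strategy, and training hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_TECHNIQUES = 4  # assumed number of vocal-technique classes in MVS

# Load an ImageNet-pretrained backbone and freeze its feature extractor.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False
# Replace the classification head for the vocal-technique labels.
model.fc = nn.Linear(model.fc.in_features, NUM_TECHNIQUES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(spectrograms, labels):
    """One fine-tuning step on a batch of (B, 3, 224, 224) spectrogram images."""
    optimizer.zero_grad()
    loss = criterion(model(spectrograms), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```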