Sinsy
| Developer(s) | Nagoya Institute of Technology |
|---|---|
| Preview release | 0.92 / December 25, 2015 |
| Development status | Active |
| Operating system | Linux |
| Available in | Japanese, English, Chinese |
| Type | Vocal synthesizer application |
| License | Modified BSD license |
| Website | www |
Sinsy (Singing Voice Synthesis System) (しぃんしぃ, Shiinshii) is an online Hidden Markov model (HMM)-based singing voice synthesis system developed by the Nagoya Institute of Technology and released under the Modified BSD license.
Overview
The online demonstrator is free to use, but only generates tracks up to five minutes long. The user uploads a score in the MusicXML format, which the Sinsy website reads to output a WAV file of the generated voice. Gender factor, vibrato intensity, and pitch shift can be adjusted prior to output.[1]
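To illustrate the kind of input Sinsy accepts, the following is a minimal hand-written MusicXML sketch: a single part with one whole note carrying the lyric syllable "la". The part name and lyric are arbitrary placeholders; a real score would typically be exported from notation software rather than written by hand.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<score-partwise version="3.0">
  <part-list>
    <!-- One vocal part; the id links it to the <part> below -->
    <score-part id="P1">
      <part-name>Voice</part-name>
    </score-part>
  </part-list>
  <part id="P1">
    <measure number="1">
      <attributes>
        <divisions>1</divisions>
        <time><beats>4</beats><beat-type>4</beat-type></time>
      </attributes>
      <!-- A whole note on C4 sung to the syllable "la" -->
      <note>
        <pitch><step>C</step><octave>4</octave></pitch>
        <duration>4</duration>
        <type>whole</type>
        <lyric><syllabic>single</syllabic><text>la</text></lyric>
      </note>
    </measure>
  </part>
</score-partwise>
```

The `<lyric>` elements attached to each note are what the synthesizer sings; notes without lyrics are treated according to the engine's defaults.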
As of December 25, 2015, the credited creators of Sinsy were Keiichi Tokuda (producer and designer), Keiichiro Oura (design and development), Kazuhiro Nakamura (development and main maintainer), and Yoshihiko Nankaku.[2]
It currently supports only Japanese and English, with Mandarin support in development.[3][4]
Products
- Yoko (謡子), a Japanese female vocal.
- Xiang-Ling (香鈴), a Japanese female vocal; an English vocal was added on Christmas 2015, and Mandarin was also added to her language capabilities.
- Matsuo-P (松尾P), an English masculine vocal.
- Namine Ritsu S (波音リツS), a Japanese masculine vocal. Originally produced for UTAU, it was released on December 25, 2013. Despite being a masculine vocal, it has a female voice provider.
References
- ↑ Hentai (2012-12-27). "Sinsy Updates to 3.3 & Releases English Demo". Engloids.Info. Retrieved 2015-06-04.
- ↑ "The HMM-Based Singing Voice Synthesis System "Sinsy" version 0.92". Sinsy.sourceforge.net. Retrieved 2016-01-28.
- ↑ ITmedia News – "MMDAgent", which even lets you converse easily with Hatsune Miku: a detailed interview (初音ミクとも簡単に対話できる「MMDAgent」、その詳細を聞いてきた). Retrieved November 23, 2013.
- ↑ Nakamura, K.; Oura, K.; Nankaku, Y.; Tokuda, K. (May 2014). "HMM-Based singing voice synthesis and its application to Japanese and English". 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP): 265–269. doi:10.1109/ICASSP.2014.6853599.