Neural Network Learns to Generate Voice (RNN/LSTM)

[VOLUME WARNING] This is what happens when you throw raw audio (which happens to be a cute voice) into a neural network and then tell it to spit out what it’s learned.

This is a recurrent neural network (LSTM type) with 3 layers of 680 neurons each, trying to find patterns in audio and reproduce them as well as it can. It’s not a particularly big network considering the complexity and size of the data, mostly due to computing constraints, which makes me even more impressed with what it managed to do.

The audio that the network was learning from is voice actress Kanematsu Yuka voicing Hinata from Pure Pure. I used 11025 Hz, 8-bit audio because sound files get big quickly, at least compared to text files – 10 minutes of audio already comes to 6.29 MB, whereas 6.29 MB of plain text would take a human weeks or months to read.
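As a quick sanity check on that figure: 8-bit mono PCM stores exactly one byte per sample, so the raw data size is just sample rate times duration (the small difference from 6.29 MB quoted above presumably comes from file headers or rounding):

```python
# Back-of-the-envelope size check for 10 minutes of 11025 Hz, 8-bit mono audio.
sample_rate_hz = 11025   # samples per second
bytes_per_sample = 1     # 8-bit PCM: one byte per sample
duration_s = 10 * 60     # 10 minutes

raw_bytes = sample_rate_hz * bytes_per_sample * duration_s
print(raw_bytes)                  # 6615000 bytes of raw samples
print(raw_bytes / (1024 * 1024))  # ~6.31 MiB, close to the quoted 6.29 MB
```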

UPDATE: By popular demand, I have uploaded a video where I did this with male English voice, too: https://www.youtube.com/watch?v=NG-LATBZNBs

I was using the program “torch-rnn” (https://github.com/jcjohnson/torch-rnn/), which is actually designed to learn from and generate plain text. I wrote a program that converts any data into UTF-8 text and vice versa, and to my excitement, torch-rnn happily processed that text as if there were nothing unusual about it. I did this because I don’t know where to begin coding my own neural network program, but the workaround has some annoying restrictions. E.g. torch-rnn doesn’t like to output more than about 300 KB of data, hence all generated sounds being only ~27 seconds long.
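The ~27-second figure follows directly from the sample rate: at one byte per 8-bit sample, 300 KB of decoded output corresponds to output_bytes / sample_rate seconds of audio:

```python
# Why a ~300 KB output limit yields ~27 seconds of 11025 Hz, 8-bit audio.
sample_rate_hz = 11025
output_bytes = 300_000  # approximate limit, one decoded byte per sample

duration_s = output_bytes / sample_rate_hz
print(round(duration_s, 1))  # ~27.2 seconds
```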

It took roughly 29 hours to train the network to ~35 epochs (74,000 iterations) and over 12 hours to generate the samples (output audio). These times are quite approximate as the same server was both training and sampling (from past network “checkpoints”) at the same time, which slowed it down. Huge thanks go to Melan for letting me use his server for this fun project! Let’s try a bigger network next time, if you can stand waiting an hour for 27 seconds of potentially-useless audio. xD
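The post doesn’t show the exact commands used, but based on torch-rnn’s documented interface, a run with this architecture would look roughly like the following. The file names and checkpoint number are assumptions; the flags come from the project’s README:

```shell
# Hypothetical torch-rnn workflow for this project; file names are assumptions.

# 1. Preprocess the UTF-8 "text" (converted audio) into torch-rnn's format.
python scripts/preprocess.py \
  --input_txt voice_as_utf8.txt \
  --output_h5 voice.h5 \
  --output_json voice.json

# 2. Train a 3-layer LSTM with 680 units per layer.
th train.lua -input_h5 voice.h5 -input_json voice.json \
  -model_type lstm -num_layers 3 -rnn_size 680

# 3. Sample from a saved checkpoint; -length is in characters, so ~300000
#    characters decode back to ~27 seconds of audio.
th sample.lua -checkpoint cv/checkpoint_74000.t7 -length 300000 > generated.txt
```

The generated text would then go back through the UTF-8-to-binary converter to become playable raw audio.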

I feel that my target audience couldn’t possibly get any smaller than it is right now…

EDIT: I have put some graphs of the training and validation losses on my blog for those who have asked what the losses were!
http://robbi-985.homeip.net/blog/?p=1760#settings

EDIT 2: I have been asked several times about my binary-to-UTF-8 program. The program basically substitutes a valid UTF-8 character for each raw byte value, so after conversion there’ll be a maximum of 256 unique UTF-8 characters. I threw the program together in VB6, so it will only run on Windows. However, I rewrote all the important code in C++-like pseudocode:
http://robbi-985.homeip.net/information/bintoutf8_pseudo.txt
Also, here is an English explanation of how my binary-to-UTF-8 program works:
http://robbi-985.homeip.net/information/bintoutf8_info.txt
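The idea described above – a reversible one-to-one mapping from each of the 256 byte values to a valid UTF-8 character – can be sketched in a few lines of Python. The 0x100 offset here is my own choice (it avoids ASCII control characters entirely); the author’s actual character assignments may differ:

```python
# Sketch of a reversible byte <-> UTF-8 mapping (not the author's exact table).
# Each byte value b becomes the single character chr(OFFSET + b), so the
# converted text contains at most 256 unique characters, all valid UTF-8.

OFFSET = 0x100  # assumption: skip the ASCII/control-character range entirely

def bytes_to_utf8(data: bytes) -> str:
    """Map every raw byte to one Unicode character."""
    return "".join(chr(OFFSET + b) for b in data)

def utf8_to_bytes(text: str) -> bytes:
    """Invert the mapping to recover the original bytes."""
    return bytes(ord(c) - OFFSET for c in text)

# Round-trip check over all 256 possible byte values.
raw = bytes(range(256))
encoded = bytes_to_utf8(raw).encode("utf-8")  # each char encodes to 2 bytes here
assert utf8_to_bytes(encoded.decode("utf-8")) == raw
```

Any such mapping works as long as it is one-to-one and every chosen character survives a UTF-8 encode/decode round trip, which is what lets torch-rnn treat arbitrary binary data as ordinary text.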
