# Requirements
- Python 2.7 and CUDA 8.0
- gensim
- pretty_midi
- py_midi
- tensorflow-gpu 1.14.0
- jupyter
- matplotlib
- scikit-learn
# cond-LSTM-GAN
Reference: Yi Yu, Abhishek Srivastava, Simon Canales, "Conditional LSTM-GAN for Melody Generation from Lyrics": https://arxiv.org/abs/1908.05551
This project contains a pipeline to generate melodies from given lyrics. After pre-processing the data, the model is trained on the pre-processed data and validated model checkpoints are saved. The user can then generate new melodies through a Jupyter notebook.
## Folders
- *Data*: contains the raw file-by-file dataset to be pre-processed (songs_word_level) and a folder that stores the matrix-shaped dataset after pre-processing.
- *enc_models*: contains the word2Vec and syllable2Vec models trained on our lyrics dataset (see the loading sketch after this list).
- *saved_gan_models*: stores the trained models for Lyrics2Melody.
- *settings*: contains the settings files to be passed as arguments for training.
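As an illustration, the encoders in *enc_models* could be loaded with gensim along these lines. This is a minimal sketch, assuming the encoders are saved as gensim Word2Vec models; the file names below are hypothetical placeholders, not the repository's actual file names.

```python
# Minimal sketch: load the lyrics encoders and embed one line of lyrics.
# Assumes the encoders were saved as gensim Word2Vec models; the file names
# below are placeholders and may differ from the actual files in enc_models.
from gensim.models import Word2Vec

word_model = Word2Vec.load("enc_models/word2vec.model")        # hypothetical path
syll_model = Word2Vec.load("enc_models/syllable2vec.model")    # hypothetical path

lyrics = ["listen", "to", "the", "rhythm"]
# One embedding vector per word; out-of-vocabulary words are skipped here.
word_vectors = [word_model.wv[w] for w in lyrics if w in word_model.wv]
print(len(word_vectors), "word vectors of dimension", word_model.vector_size)
```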
### Python files
- *lstm-gan-lyrics2melody*: main model. Original code by Olof Mogren, modified by yylab1. To run (requires CUDA >= 8.0 and a matching tensorflow-gpu version): `python lstm-gan-lyrics2melody.py --settings_file settings`
- *0.ipynb*: creates the dataset matrices using the raw data in ./data/songs_word_level.
- *3.ipynb*: generates triplets of music attributes for the lyrics in the testing set.
- *4.ipynb*: generates a MIDI file for given lyrics (see the sketch after this list).
- Others: utility functions and MIDI processing tools (some by Olof Mogren).
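For illustration, generated music-attribute triplets (pitch, duration, rest) could be rendered to a MIDI file roughly as follows. This is a minimal sketch using pretty_midi, not the notebook's actual code; the example triplets and output file name are made up.

```python
# Minimal sketch: turn (MIDI pitch, duration in seconds, rest before the note)
# triplets into a playable MIDI file with pretty_midi. The triplets below are
# invented for illustration; the notebooks produce the real ones.
import pretty_midi

triplets = [(60, 0.5, 0.0), (62, 0.5, 0.0), (64, 1.0, 0.5)]  # example data only

pm = pretty_midi.PrettyMIDI()
melody = pretty_midi.Instrument(program=0)  # Acoustic Grand Piano

time = 0.0
for pitch, duration, rest in triplets:
    time += rest                            # silence before the note starts
    note = pretty_midi.Note(velocity=100, pitch=int(pitch),
                            start=time, end=time + duration)
    melody.notes.append(note)
    time += duration

pm.instruments.append(melody)
pm.write("generated_melody.mid")            # hypothetical output name
```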