Python/TensorFlow implementation of Neural Style Transfer
Table of contents
- Installation
- Running
- Pre-trained Data
- Architecture Documentation
- Use
- Standard Run
- More Examples
- Timings
- Additional Documentation
The following are required to run the application:
- Python 3 (tested with 3.10.6)
- the "venv" module, if you use Python virtual environments:
- python3 -m venv py3ml
- . ./py3ml/bin/activate
- pip install --upgrade pip
- pip install -r requirements.txt
TODO - add NVIDIA CUDA install instructions.
Note: not all of the packages in requirements.txt are used by this project; requirements.txt is a superset of the machine-learning packages used by various machine-learning projects.
- Activate the Python virtual environment, then:
(py3ml) $ ./nst-standalone.py
[creates directory outputs/output0051/*]
The default configuration takes about 2 minutes to complete (on a relatively recent CPU, with no GPU/CUDA extensions installed).
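Each run writes into the next numbered directory under outputs/. As an illustration only (the numbering logic below is an assumption, not the actual code in nst-standalone.py), picking the next free outputs/outputNNNN slot might look like:

```python
import os
import re

def next_output_dir(base="outputs"):
    """Return the next unused outputs/outputNNNN path.

    Hypothetical helper -- nst-standalone.py's real logic may differ.
    """
    os.makedirs(base, exist_ok=True)
    pattern = re.compile(r"output(\d{4})$")
    used = [int(m.group(1))
            for d in os.listdir(base)
            if (m := pattern.match(d))]
    # Highest existing index plus one, zero-padded to four digits.
    return os.path.join(base, f"output{max(used, default=0) + 1:04d}")
```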
The pre-trained VGG19 weights .h5 file is included in this repository as a 77 MB file.
It is also available from kaggledataset, although you have to download the entire 1 GB dataset to get it.
It is used by tf.keras.applications.VGG19; see tensorflowVGG19 and kerasvgg19.
See tensorflow tutorial at https://www.tensorflow.org/tutorials/generative/style_transfer
See the variable 'inputJson' at the top of nst-standalone.py for the configuration parameters used on each run. In order of importance (i.e. the ones you are most likely to change):
- content_image_filename --content
- style_image_filename --style
- epochs --epochs
- save_epoch_every and print_epoch_every --saveEveryEpoch
Actual neural net parameters:
- adam_learning_rate --learningRate
- alpha (content) and beta (style) weights --alpha and --beta
- style_layers
- random generator seed --seed
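The alpha and beta parameters weight the two terms of the standard neural style transfer objective, J(G) = alpha * J_content(C, G) + beta * J_style(S, G), where the style cost compares Gram matrices of activations at the chosen style_layers. A minimal NumPy sketch of those two pieces (illustrative only; the default values 10 and 40 below are assumptions, not necessarily this project's defaults):

```python
import numpy as np

def gram_matrix(activations):
    """Gram matrix of an (n_positions, n_channels) activation map:
    channel-by-channel correlations that capture 'style'."""
    return activations.T @ activations

def total_cost(j_content, j_style, alpha=10.0, beta=40.0):
    """Weighted NST objective: alpha scales content fidelity,
    beta scales style fidelity."""
    return alpha * j_content + beta * j_style

# Toy example: 4 spatial positions, 3 channels.
acts = np.arange(12, dtype=float).reshape(4, 3)
g = gram_matrix(acts)      # (3, 3) symmetric matrix
j = total_cost(1.0, 2.0)   # 10*1 + 40*2 = 90
```

Raising beta relative to alpha pushes the output toward the style image's textures; raising alpha preserves more of the content image's structure.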
The standard run uses the louvre image for content and the monet image for style, and runs 250 epochs to create the output image:
Type | Image
---|---
Content | ![]()
Style | ![]()
Output (250 epochs) | ![]()
Output (2500 epochs) | ![]()
See ml-style-transfer-samples for a more comprehensive set of example input.json files, along with different style images and different content (base) images.
- seconds - from the Python script's "finished" output
- seconds elapsed - wall-clock time, from the shell's "time" output
Machine | Environment | Standard run (250 epochs, louvre, monet)
---|---|---
Intel i5-12600, 6 cores / 12 vcores, 3.3 GHz | Windows Subsystem for Linux, Ubuntu 22.04 | 138 seconds (180 seconds elapsed)
Intel i5-12600, 6 cores / 12 vcores, 3.3 GHz | Windows 11, cmd shell | 201 seconds (210 seconds elapsed)
Intel i5-12600, 6 cores / 12 vcores, 3.3 GHz | Windows 11, miniconda, CUDA 11.2, cudnn 8.1.0, RTX 3080Ti GPU | 12 seconds (16 seconds elapsed)
Intel i5-12600, 6 cores / 12 vcores, 3.3 GHz | Windows Subsystem for Linux, Ubuntu 22.04, tensorflow 2.12.0, cudatoolkit 11.8.0, RTX 3080Ti GPU | 9 seconds (11 seconds elapsed)
Intel Xeon E5-2640, 12 cores, 2.5 GHz, 64 GB RAM, CentOS, VMware Workstation | Virtual machine, Ubuntu 22.04, 4 cores, 8 GB RAM | 1010 seconds (1050 seconds elapsed)
2x Intel Xeon E5-2680, 20 cores / 40 vcores, 2.8 GHz, 256 GB RAM, Windows, VMware Workstation | Virtual machine, Ubuntu 22.04, 8 cores, 16 GB RAM | 599 seconds (605 seconds elapsed)
AWS p3.2xlarge, Intel Xeon Skylake 8175, 2.5 GHz, 8 vCPU, 61 GB RAM, 1 Tesla V100 GPU, $3.06/hour | ami-0649417d1ede3c91a, Ubuntu 20.04, tensorflow 2.12.0 | 11 seconds (14 seconds elapsed)
AWS t2.large, Intel Xeon E5-2686 v4, 2.30 GHz, 2 vCPU, 8 GB RAM, no GPU, $0.093/hour | ami-0649417d1ede3c91a, Ubuntu 20.04 | 845 seconds (849 seconds elapsed)
AWS c5.4xlarge, Intel Platinum 8223CL, 3.0 GHz, 16 vCPU, 32 GB RAM, no GPU, $0.68/hour | ami-0649417d1ede3c91a, Ubuntu 20.04 | 219 seconds (221 seconds elapsed)
AWS inf1.xlarge, Intel Xeon 8275CL, 3.0 GHz, 4 vCPU, 8 GB RAM, no GPU, $0.228/hour | ami-0649417d1ede3c91a, Ubuntu 20.04, see FootNote1 | 484 seconds (490 seconds elapsed)
FootNote1 - ami-0649417d1ede3c91a is probably the wrong AMI to use with this instance type.
Resources - TBD