pytorch-tutorial/tutorials/09 - Image Captioning

Usage

1. Clone the repositories

$ git clone https://github.com/pdollar/coco.git
$ git clone https://github.com/yunjey/pytorch-tutorial.git
$ cd "pytorch-tutorial/tutorials/09 - Image Captioning"

2. Download the dataset

$ pip install -r requirements.txt
$ chmod +x download.sh
$ ./download.sh

3. Preprocessing

$ python vocab.py    
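The preprocessing step builds a vocabulary from the MSCOCO caption annotations, mapping each frequent word to an integer id. A minimal sketch of how such a vocabulary wrapper might work (the actual vocab.py may differ; `Vocabulary`, `build_vocab`, and the special tokens here are illustrative):

```python
from collections import Counter

class Vocabulary:
    """Maps words to integer ids and back (illustrative sketch)."""
    def __init__(self):
        self.word2idx = {}
        self.idx2word = {}
        # Special tokens commonly used in captioning models.
        for token in ('<pad>', '<start>', '<end>', '<unk>'):
            self.add_word(token)

    def add_word(self, word):
        if word not in self.word2idx:
            idx = len(self.word2idx)
            self.word2idx[word] = idx
            self.idx2word[idx] = word

    def __call__(self, word):
        # Words not seen often enough fall back to <unk>.
        return self.word2idx.get(word, self.word2idx['<unk>'])

    def __len__(self):
        return len(self.word2idx)

def build_vocab(captions, threshold=4):
    """Keep only words that appear at least `threshold` times."""
    counter = Counter(w for c in captions for w in c.lower().split())
    vocab = Vocabulary()
    for word, count in counter.items():
        if count >= threshold:
            vocab.add_word(word)
    return vocab
```

Thresholding rare words keeps the output softmax small and lets the model treat one-off words uniformly as `<unk>`.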

4. Train the model

$ python train.py    

5. Generate captions

To generate captions for the MSCOCO validation dataset, see evaluate_model.ipynb. To generate a caption for a custom image file, run the command below.

$ python sample.py --image=sample_image.jpg     
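At sampling time, the decoder produces the caption one word at a time, typically by greedy decoding: feed the previous word in, pick the highest-scoring next word, and stop at the end token. A framework-free sketch of that loop (here `step_scores` is a hypothetical stand-in for one decoder step returning per-word scores; sample.py itself uses the trained PyTorch model):

```python
def greedy_decode(step_scores, max_len=20):
    """Greedy decoding sketch: at each step take the highest-scoring
    word until <end> is produced or max_len is reached.
    `step_scores(prev_word)` stands in for one decoder step and
    returns a {word: score} dict (illustrative interface)."""
    caption = []
    word = '<start>'
    for _ in range(max_len):
        scores = step_scores(word)
        word = max(scores, key=scores.get)  # argmax over the vocabulary
        if word == '<end>':
            break
        caption.append(word)
    return ' '.join(caption)
```

Beam search is a common drop-in replacement for this loop when caption quality matters more than speed.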

Pretrained model

If you do not want to train the model yourself, you can use a pretrained model, which is provided as a zip file. Download the file here and extract it to the model directory.
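Extraction can be done with `unzip` or, equivalently, with a few lines of Python (the archive and directory names below are illustrative; use whatever the downloaded file is actually called):

```python
import os
import zipfile

def extract_model(zip_path, target_dir='model'):
    """Extract a pretrained-model archive into the model directory."""
    os.makedirs(target_dir, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(target_dir)
    # Return the extracted file names for a quick sanity check.
    return sorted(os.listdir(target_dir))
```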