Algorithm Introduction

NeXtVLAD was the best-performing single model in the 2nd YouTube-8M Video Understanding Challenge, reaching a GAP above 0.87 with fewer than 80M parameters. The model provides a way to aggregate and compress frame-level video features into a fixed-length vector, making it suitable for classifying large video files. Its key idea is to build on NetVLAD by first splitting the high-dimensional features into groups, then aggregating temporal information with an attention mechanism, which achieves high accuracy with far fewer parameters. For details, see NeXtVLAD: An Efficient Neural Network to Aggregate Frame-level Features for Large-scale Video Classification.
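To make the grouping-plus-attention idea concrete, here is a toy NumPy sketch of NeXtVLAD-style aggregation. It illustrates the shapes and data flow only: the random projection stands in for the learned expansion layer, and a simple mean score stands in for the learned attention FC, so this is not the trained model.

```python
import numpy as np

def nextvlad_aggregate(frames, clusters, G=4, expansion=2, rng=None):
    """Toy NeXtVLAD-style aggregation (shapes and flow only, not the trained model).

    frames:   (M, N) frame-level features
    clusters: (K, N*expansion//G) cluster centers for the grouped features
    Returns a fixed-length descriptor of size K * (N*expansion//G).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    M, N = frames.shape
    # 1) expand the features (a fixed random projection stands in for the learned FC)
    W = rng.standard_normal((N, N * expansion))
    x = frames @ W                                    # (M, N*expansion)
    Dg = N * expansion // G
    x = x.reshape(M, G, Dg)                           # 2) split into G groups
    # 3) per-(frame, group) attention weight (sigmoid of a stand-in score)
    scores = x.mean(axis=2)                           # placeholder for learned attention
    alpha = 1.0 / (1.0 + np.exp(-scores))             # (M, G)
    # 4) soft-assign grouped features to clusters and sum weighted residuals
    K = clusters.shape[0]
    sim = x @ clusters.T                              # (M, G, K)
    a = np.exp(sim - sim.max(axis=2, keepdims=True))
    a = a / a.sum(axis=2, keepdims=True)              # softmax over clusters
    vlad = np.zeros((K, Dg))
    for k in range(K):
        resid = x - clusters[k]                       # (M, G, Dg)
        w = (alpha * a[:, :, k])[:, :, None]          # attention * assignment
        vlad[k] = (w * resid).sum(axis=(0, 1))
    return vlad.reshape(-1)

desc = nextvlad_aggregate(np.random.default_rng(1).standard_normal((10, 8)),
                          np.random.default_rng(2).standard_normal((3, 4)))
print(desc.shape)  # (12,) = K * Dg
```

Note how the descriptor length depends only on K and the per-group dimension, not on the number of frames M, which is what makes the output suitable as a fixed-size video representation.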

This example implements the single-model architecture from the paper, using the 2nd YouTube-8M train split for training and the val split for testing.

This example uses the YouTube-8M dataset as updated in 2018. The official dataset is used, with the TFRecord files converted to pickle files for use with PaddlePaddle. YouTube-8M officially provides both frame-level and video-level features. The dataset attached to this example is a preprocessed subset of YouTube-8M containing only 5 video files, and the same data is used for both training and testing; it is intended only as a model demo.

Download and installation commands

## Installation command (CPU)
pip install -f https://paddlepaddle.org.cn/pip/oschina/cpu paddlepaddle

## Installation command (GPU)
pip install -f https://paddlepaddle.org.cn/pip/oschina/gpu paddlepaddle-gpu

To train on the full dataset, follow the steps below.

Data download

Use the official YouTube-8M links to download the training set and validation set separately. Each link provides download addresses for 3,844 files; you can also use the official download script. After downloading you will have 3,844 training files and 3,844 validation files (TFRecord format). Let Code_Root denote the root directory of the video model codebase, and enter the dataset/youtube8m directory:

cd dataset/youtube8m

Create the directories tf/train and tf/val under youtube8m:

mkdir tf && cd tf

mkdir train && mkdir val

and place the downloaded train and validate data in them, respectively.

Data format conversion

To make the data usable for PaddlePaddle training, the downloaded TFRecord files must be converted offline to pickle format; use the conversion script PaddleVideo/tf2pkl.py.

Create the directories pkl/train and pkl/val under dataset/youtube8m:

cd dataset/youtube8m

mkdir pkl && cd pkl

mkdir train && mkdir val

Convert the file format (TFRecord -> pkl): enter the dataset/youtube8m directory and run the script

python tf2pkl.py ./tf/train ./pkl/train

python tf2pkl.py ./tf/val ./pkl/val

to convert the train and validate sets to pkl files, respectively. tf2pkl.py takes two arguments: the directory holding the source TFRecord files and the directory for the converted pkl files.

Note: reading TFRecord files requires TensorFlow, so install TensorFlow first, or convert the data in an environment that already has TensorFlow and then copy the result into dataset/youtube8m/pkl. To avoid conflicts with the PaddlePaddle environment, it is recommended to do the conversion elsewhere and copy the data over afterwards.
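The exact record layout of the converted files is defined by PaddleVideo/tf2pkl.py; purely as a hypothetical sketch of what a converted pkl file could look like (the field names and nesting here are assumptions, not the script's actual schema):

```python
import os
import pickle
import tempfile

# Hypothetical record layout -- the real schema is whatever tf2pkl.py writes.
record = {
    'video': 'Eu4t',                 # video id
    'feature': [[0.1] * 1024],       # frame-level RGB features, 1024-d per frame
    'audio': [[0.2] * 128],          # frame-level audio features, 128-d per frame
    'label': [5],                    # multi-label class ids
}

# Assume one pkl file holds a list of such records.
path = os.path.join(tempfile.mkdtemp(), 'example.pkl')
with open(path, 'wb') as f:
    pickle.dump([record], f)

with open(path, 'rb') as f:
    records = pickle.load(f)
print(len(records), len(records[0]['feature'][0]), len(records[0]['audio'][0]))  # 1 1024 128
```

The 1024-d RGB and 128-d audio dimensions match the video_feature_size and audio_feature_size settings in the model config used later.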

Generate file lists

Enter the dataset/youtube8m directory:

ls $Code_Root/dataset/youtube8m/pkl/train/* > train.list

ls $Code_Root/dataset/youtube8m/pkl/val/* > val.list

This generates two files under dataset/youtube8m, train.list and val.list, where each line holds the absolute path of one pkl file.
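The same list files can be produced programmatically. A self-contained sketch, using a throwaway directory in place of $Code_Root/dataset/youtube8m:

```python
import glob
import os
import tempfile

# Throwaway directory standing in for $Code_Root/dataset/youtube8m.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'pkl/train'))
for name in ('a.pkl', 'b.pkl'):
    open(os.path.join(root, 'pkl/train', name), 'w').close()

# Equivalent of: ls $Code_Root/dataset/youtube8m/pkl/train/* > train.list
paths = sorted(glob.glob(os.path.join(root, 'pkl/train/*')))
with open(os.path.join(root, 'train.list'), 'w') as f:
    f.write('\n'.join(paths) + '\n')

lines = open(os.path.join(root, 'train.list')).read().splitlines()
print(len(lines))  # 2, one absolute path per line
```

The paths must be absolute (as in the `ls $Code_Root/...` commands above), since the training script reads the list file from its own working directory.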

In[  ]
# Extract the dataset
!cd data/data10073/ && unzip -qo youtube8m.zip
In[  ]
### Install wget
!pip install wget
Looking in indexes: https://pypi.mirrors.ustc.edu.cn/simple/
Collecting wget
  Downloading https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/47/6a/62e288da7bcda82b935ff0c6cfe542970f04e29c756b0e147251b2fb251f/wget-3.2.zip
Building wheels for collected packages: wget
  Building wheel for wget (setup.py) ... done
  Created wheel for wget: filename=wget-3.2-cp37-none-any.whl size=9680 sha256=41b686828168b16846f26be3d1bf7caebf1f0b191a22fcbc48b4843de2cf71dc
  Stored in directory: /home/aistudio/.cache/pip/wheels/26/28/0d/cd5205dcdeaca81bf62909a7cfd449eaf6698e8ab18992f71a
Successfully built wget
Installing collected packages: wget
Successfully installed wget-3.2
In[20]
# Train the model; parameters are saved under checkpoints, and the frozen model under freeze_model
!python PaddleVideo/train.py --model_name="NEXTVLAD" \
                             --config=PaddleVideo/configs/nextvlad.txt \
                             --save_dir=PaddleVideo/checkpoints \
                             --epoch=6 \
                             --batch_size=50 \
                             --valid_interval=10 \
                             --log_interval=100 \
                             --use_gpu=True
[INFO: train.py:  273]: Namespace(batch_size=50, config='PaddleVideo/configs/nextvlad.txt', enable_ce=False, epoch=6, learning_rate=None, log_interval=100, model_name='NEXTVLAD', no_memory_optimize=True, no_use_pyreader=True, pretrain=None, resume=None, save_dir='PaddleVideo/checkpoints', use_gpu=True, valid_interval=10)
[INFO: config.py:   66]: ---------------- Train Arguments ----------------
[INFO: config.py:   68]: MODEL:
[INFO: config.py:   70]:     name:NEXTVLAD
[INFO: config.py:   70]:     num_classes:3862
[INFO: config.py:   70]:     topk:20
[INFO: config.py:   70]:     video_feature_size:1024
[INFO: config.py:   70]:     audio_feature_size:128
[INFO: config.py:   70]:     cluster_size:128
[INFO: config.py:   70]:     hidden_size:2048
[INFO: config.py:   70]:     groups:8
[INFO: config.py:   70]:     expansion:2
[INFO: config.py:   70]:     drop_rate:0.5
[INFO: config.py:   70]:     gating_reduction:8
[INFO: config.py:   70]:     eigen_file:data/data10073/youtube8m/yt8m_pca/eigenvals.npy
[INFO: config.py:   68]: TRAIN:
[INFO: config.py:   70]:     epoch:6
[INFO: config.py:   70]:     learning_rate:0.0002
[INFO: config.py:   70]:     lr_boundary_examples:2000000
[INFO: config.py:   70]:     max_iter:700000
[INFO: config.py:   70]:     learning_rate_decay:0.8
[INFO: config.py:   70]:     l2_penalty:1e-05
[INFO: config.py:   70]:     gradient_clip_norm:1.0
[INFO: config.py:   70]:     use_gpu:True
[INFO: config.py:   70]:     num_gpus:1
[INFO: config.py:   70]:     batch_size:50
[INFO: config.py:   70]:     filelist:data/data10073/youtube8m/train.list
[INFO: config.py:   68]: VALID:
[INFO: config.py:   70]:     batch_size:50
[INFO: config.py:   70]:     filelist:data/data10073/youtube8m/val.list
[INFO: config.py:   68]: TEST:
[INFO: config.py:   70]:     batch_size:1
[INFO: config.py:   70]:     filelist:data/data10073/youtube8m/infer.list
[INFO: config.py:   68]: INFER:
[INFO: config.py:   70]:     batch_size:1
[INFO: config.py:   70]:     filelist:data/data10073/youtube8m/infer.list
[INFO: config.py:   71]: -------------------------------------------------
W1205 12:55:00.057816  1133 device_context.cc:235] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 9.2, Runtime API Version: 9.0
W1205 12:55:00.062944  1133 device_context.cc:243] device: 0, cuDNN Version: 7.3.
['train_rgb', 'train_audio']
[INFO: train_utils.py:   30]: ------- learning rate [0.], learning rate counter [-1] -----
I1205 12:55:01.943039  1133 parallel_executor.cc:421] The number of CUDAPlace, which is used in ParallelExecutor, is 1. And the Program will be copied 1 copies
I1205 12:55:01.953335  1133 build_strategy.cc:363] SeqOnlyAllReduceOps:0, num_trainers:1
I1205 12:55:01.962904  1133 parallel_executor.cc:285] Inplace strategy is enabled, when build_strategy.enable_inplace = True
I1205 12:55:01.969626  1133 parallel_executor.cc:368] Garbage collection strategy is enabled, when FLAGS_eager_delete_tensor_gb = 0
[INFO: metrics_util.py:   67]: [TRAIN] Epoch 0, iter 0  , loss = 2943.306885, Hit@1 = 0.00, PERR = 0.00, GAP = 0.00
[INFO: metrics_util.py:   67]: [TRAIN] Epoch 0, iter 100  , loss = 47.940777, Hit@1 = 0.54, PERR = 0.37, GAP = 0.34
[INFO: train_utils.py:  102]: [TRAIN] Epoch 0 training finished, average time: 0.11915348529815674
[INFO: train_utils.py:   30]: ------- learning rate [0.0002], learning rate counter [100] -----
[INFO: metrics_util.py:   67]: [TRAIN] Epoch 1, iter 0  , loss = 45.512417, Hit@1 = 0.44, PERR = 0.25, GAP = 0.23
[INFO: metrics_util.py:   67]: [TRAIN] Epoch 1, iter 100  , loss = 20.124653, Hit@1 = 0.54, PERR = 0.42, GAP = 0.42
[INFO: train_utils.py:  102]: [TRAIN] Epoch 1 training finished, average time: 0.11939586400985717
[INFO: train_utils.py:   30]: ------- learning rate [0.0002], learning rate counter [201] -----
[INFO: metrics_util.py:   67]: [TRAIN] Epoch 2, iter 0  , loss = 22.826988, Hit@1 = 0.48, PERR = 0.32, GAP = 0.30
[INFO: metrics_util.py:   67]: [TRAIN] Epoch 2, iter 100  , loss = 15.705976, Hit@1 = 0.60, PERR = 0.47, GAP = 0.48
[INFO: train_utils.py:  102]: [TRAIN] Epoch 2 training finished, average time: 0.12228986024856567
[INFO: train_utils.py:   30]: ------- learning rate [0.0002], learning rate counter [302] -----
[INFO: metrics_util.py:   67]: [TRAIN] Epoch 3, iter 0  , loss = 17.923090, Hit@1 = 0.54, PERR = 0.39, GAP = 0.35
[INFO: metrics_util.py:   67]: [TRAIN] Epoch 3, iter 100  , loss = 13.640166, Hit@1 = 0.62, PERR = 0.47, GAP = 0.52
[INFO: train_utils.py:  102]: [TRAIN] Epoch 3 training finished, average time: 0.12167703151702881
[INFO: train_utils.py:   30]: ------- learning rate [0.0002], learning rate counter [403] -----
[INFO: metrics_util.py:   67]: [TRAIN] Epoch 4, iter 0  , loss = 15.515293, Hit@1 = 0.58, PERR = 0.40, GAP = 0.40
[INFO: metrics_util.py:   67]: [TRAIN] Epoch 4, iter 100  , loss = 12.135924, Hit@1 = 0.66, PERR = 0.52, GAP = 0.56
[INFO: train_utils.py:  102]: [TRAIN] Epoch 4 training finished, average time: 0.12122331380844116
[INFO: train_utils.py:   30]: ------- learning rate [0.0002], learning rate counter [504] -----
[INFO: metrics_util.py:   67]: [TRAIN] Epoch 5, iter 0  , loss = 14.157166, Hit@1 = 0.60, PERR = 0.43, GAP = 0.44
[INFO: metrics_util.py:   67]: [TRAIN] Epoch 5, iter 100  , loss = 11.061392, Hit@1 = 0.70, PERR = 0.58, GAP = 0.59
[INFO: train_utils.py:  102]: [TRAIN] Epoch 5 training finished, average time: 0.12312133312225342
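The "learning rate counter" printed at each epoch suggests a piecewise exponential decay schedule driven by TRAIN.lr_boundary_examples and TRAIN.learning_rate_decay in the config above. A minimal sketch, under the assumption that the rate is multiplied by the decay factor once every lr_boundary_examples / (batch_size * num_gpus) steps:

```python
def piecewise_lr(base_lr, decay, boundary_examples, batch_size, num_gpus, step):
    # Assumption about how PaddleVideo interprets these config fields:
    # the lr is multiplied by `decay` each time the counter crosses a boundary
    # of boundary_examples / (batch_size * num_gpus) steps.
    boundary_steps = boundary_examples // (batch_size * num_gpus)
    return base_lr * (decay ** (step // boundary_steps))

# With the values from the config (lr=0.0002, decay=0.8,
# lr_boundary_examples=2000000, batch_size=50, num_gpus=1),
# one boundary is crossed after 40000 steps:
print(piecewise_lr(0.0002, 0.8, 2000000, 50, 1, 40000))
```

At the tiny demo-dataset scale above, the counter never reaches a boundary, which is why the logs show a constant learning rate of 0.0002 after warm-up.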
In[21]
# Evaluate the model
!python PaddleVideo/test.py --model_name="NEXTVLAD" --config=PaddleVideo//configs/nextvlad.txt \
                --log_interval=50 --weights=PaddleVideo/checkpoints/ --use_gpu=True
[INFO: test.py:  151]: Namespace(batch_size=None, config='PaddleVideo//configs/nextvlad.txt', log_interval=50, model_name='NEXTVLAD', use_gpu=True, weights='PaddleVideo/checkpoints/')
[INFO: config.py:   66]: ----------------  Test Arguments ----------------
[INFO: config.py:   68]: MODEL:
[INFO: config.py:   70]:     name:NEXTVLAD
[INFO: config.py:   70]:     num_classes:3862
[INFO: config.py:   70]:     topk:20
[INFO: config.py:   70]:     video_feature_size:1024
[INFO: config.py:   70]:     audio_feature_size:128
[INFO: config.py:   70]:     cluster_size:128
[INFO: config.py:   70]:     hidden_size:2048
[INFO: config.py:   70]:     groups:8
[INFO: config.py:   70]:     expansion:2
[INFO: config.py:   70]:     drop_rate:0.5
[INFO: config.py:   70]:     gating_reduction:8
[INFO: config.py:   70]:     eigen_file:data/data10073/youtube8m/yt8m_pca/eigenvals.npy
[INFO: config.py:   68]: TRAIN:
[INFO: config.py:   70]:     epoch:6
[INFO: config.py:   70]:     learning_rate:0.0002
[INFO: config.py:   70]:     lr_boundary_examples:2000000
[INFO: config.py:   70]:     max_iter:700000
[INFO: config.py:   70]:     learning_rate_decay:0.8
[INFO: config.py:   70]:     l2_penalty:1e-05
[INFO: config.py:   70]:     gradient_clip_norm:1.0
[INFO: config.py:   70]:     use_gpu:True
[INFO: config.py:   70]:     num_gpus:1
[INFO: config.py:   70]:     batch_size:5
[INFO: config.py:   70]:     filelist:data/data10073/youtube8m/train.list
[INFO: config.py:   68]: VALID:
[INFO: config.py:   70]:     batch_size:1
[INFO: config.py:   70]:     filelist:data/data10073/youtube8m/val.list
[INFO: config.py:   68]: TEST:
[INFO: config.py:   70]:     batch_size:1
[INFO: config.py:   70]:     filelist:data/data10073/youtube8m/infer.list
[INFO: config.py:   68]: INFER:
[INFO: config.py:   70]:     batch_size:1
[INFO: config.py:   70]:     filelist:data/data10073/youtube8m/infer.list
[INFO: config.py:   71]: -------------------------------------------------
W1205 12:57:07.670893  1199 device_context.cc:235] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 9.2, Runtime API Version: 9.0
W1205 12:57:07.675004  1199 device_context.cc:243] device: 0, cuDNN Version: 7.3.
[INFO: metrics_util.py:   67]: [EVAL] Batch 0 , loss = 5.042916, Hit@1 = 1.00, PERR = 0.50, GAP = 0.64
[INFO: metrics_util.py:   67]: [EVAL] Batch 50 , loss = 3.376050, Hit@1 = 1.00, PERR = 1.00, GAP = 1.00
[INFO: metrics_util.py:   67]: [EVAL] Batch 100 , loss = 6.219836, Hit@1 = 1.00, PERR = 0.50, GAP = 0.61
[INFO: metrics_util.py:   67]: [EVAL] Batch 150 , loss = 9.328680, Hit@1 = 0.00, PERR = 0.00, GAP = 0.00
[INFO: metrics_util.py:   67]: [EVAL] Batch 200 , loss = 10.781406, Hit@1 = 1.00, PERR = 0.50, GAP = 0.50
[INFO: metrics_util.py:   67]: [EVAL] Batch 250 , loss = 7.200578, Hit@1 = 1.00, PERR = 0.75, GAP = 0.95
[INFO: metrics_util.py:   67]: [EVAL] Batch 300 , loss = 13.001336, Hit@1 = 1.00, PERR = 0.67, GAP = 0.67
[INFO: metrics_util.py:   67]: [EVAL] Batch 350 , loss = 17.485695, Hit@1 = 1.00, PERR = 0.57, GAP = 0.66
[INFO: metrics_util.py:   67]: [EVAL] Batch 400 , loss = 10.828633, Hit@1 = 1.00, PERR = 0.67, GAP = 0.67
[INFO: metrics_util.py:   67]: [EVAL] Batch 450 , loss = 10.768428, Hit@1 = 0.00, PERR = 0.00, GAP = 0.22
[INFO: metrics_util.py:   67]: [EVAL] Batch 500 , loss = 12.843693, Hit@1 = 0.00, PERR = 0.00, GAP = 0.03
[INFO: metrics_util.py:   67]: [EVAL] Batch 550 , loss = 15.224954, Hit@1 = 0.00, PERR = 0.00, GAP = 0.00
[INFO: metrics_util.py:   67]: [EVAL] Batch 600 , loss = 17.597933, Hit@1 = 0.00, PERR = 0.33, GAP = 0.17
[INFO: metrics_util.py:   67]: [EVAL] Batch 650 , loss = 4.179424, Hit@1 = 1.00, PERR = 0.50, GAP = 0.83
[INFO: metrics_util.py:   67]: [EVAL] Batch 700 , loss = 30.473232, Hit@1 = 1.00, PERR = 0.43, GAP = 0.43
[INFO: metrics_util.py:   67]: [EVAL] Batch 750 , loss = 2.380761, Hit@1 = 1.00, PERR = 1.00, GAP = 1.00
[INFO: metrics_util.py:   67]: [EVAL] Batch 800 , loss = 3.774885, Hit@1 = 1.00, PERR = 1.00, GAP = 1.00
[INFO: metrics_util.py:   67]: [EVAL] Batch 850 , loss = 9.557622, Hit@1 = 1.00, PERR = 0.67, GAP = 0.83
[INFO: metrics_util.py:   67]: [EVAL] Batch 900 , loss = 13.987719, Hit@1 = 1.00, PERR = 1.00, GAP = 1.00
[INFO: metrics_util.py:   67]: [EVAL] Batch 950 , loss = 4.785729, Hit@1 = 0.00, PERR = 0.00, GAP = 0.25
[INFO: metrics_util.py:   67]: [EVAL] Batch 1000 , loss = 9.739042, Hit@1 = 0.00, PERR = 0.00, GAP = 0.00
[INFO: metrics_util.py:   76]: [EVAL] eval finished. 	avg_hit_at_one: 0.6485148514851485,	avg_perr: 0.4977417906625834,	avg_loss :10.904523887964759,	aps: [0.9177158592027059, 0.8424836475375653, 0.8439369886917671, 0.9511840698500901, 0.9039911275611131, 0.9095888038541853, 0.786752242794184, 0.7320357380812693, 0.8730324934122548, 0.895113905510173, 0.868590466076601, 0.8303536429324336, 0.7243278358537787, 0.7800487692928815, 0.7165748741905942, 0.7976501658785317, 0.7065423439523076, 0.5370374215595649, 0.7422722845207615, 0.6394046931096081, 0.8755971668510368, 0.8077002352948538, 0.9292232377060211, 0.86545505776275, 0.9178382496693696, 0.5881798126768992, 0.7359941801118272, 0.8097570532915362, 0.7954629666858769, 0.8671272246272246, 0.8200237824131833, 0.7094193020719738, 0.8640906346069389, 0.6616019993892355, 0.9029017579128295, 0.3227267052897304, 0.8402272727272728, 0.782659893103972, 0.7641958431432114, 0.886734693877551, 0.734593627503866, 0.5092456004140786, 0.6894817514915361, 0.5275035787929137, 0.47510681871758603, 0.822147531654574, 0.5058080808080808, 0.8676638176638176, 0.790502354788069, 0.28034887980982004, 0.9519230769230769, 0.9204545454545454, 0.7349780701754385, 0.44212962962962954, 0.6543880662020906, 0.7928482687840976, 1.0, 0.8444444444444443, 0.6882936507936508, 0.8928571428571427, 0.39080996884735203, 0.4782436708860759, 0.27869468494468497, 0.452228674103674, 0.38058608058608057, 0.85, 0.29164781272759394, 0.4590702947845805, 0.6906015037593984, 0.738095238095238, 0.9370370370370369, 0.3333333333333333, 0.9013157894736842, 0.319047619047619, 0.7458565244279529, 0.07692307692307693, 0.6337542087542087, 0.9150793650793649, 0.9166666666666666, 0.41009852216748766, 0.571875, 0.31547619047619047, 0.6618876941457588, 0.8375, 0.6555555555555556, 0.1978566661080091, 0.8303571428571428, 0.789486703772418, 0.7767857142857144, 0.31385530751890195, 0.25, 0.4198994252873563, 0.4563388265544647, 0.4444444444444444, 0.6388888888888888, 1.0, 0, 
... (remaining per-class average precisions omitted for brevity; most are 0)],	gap:0.5137892043909855
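GAP (Global Average Precision), the headline metric in the evaluation above, pools the top-k predictions from all videos into one list sorted by confidence and computes average precision over it. A minimal sketch of the computation:

```python
def global_average_precision(predictions):
    """GAP over (confidence, is_correct) pairs pooled from all videos.

    predictions: list of (score, correct) tuples; in YouTube-8M evaluation
    only each video's top-k predictions are pooled (assumed already trimmed here).
    """
    pairs = sorted(predictions, key=lambda p: -p[0])   # sort by confidence, descending
    total_positives = sum(c for _, c in pairs)
    hits, gap = 0, 0.0
    for i, (_, correct) in enumerate(pairs, start=1):
        if correct:
            hits += 1
            gap += hits / i                            # precision at this rank
    return gap / max(total_positives, 1)

# Three pooled predictions: a correct one, a wrong one, a correct one.
print(global_average_precision([(0.9, 1), (0.8, 0), (0.7, 1)]))  # 0.8333...
```

Because every prediction from every video competes in one global ranking, GAP rewards well-calibrated confidences, not just per-video ranking.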
In[18]
# Freeze the model
!python PaddleVideo/freeze.py --model_name="NEXTVLAD" --config=PaddleVideo//configs/nextvlad.txt \
                 --weights=PaddleVideo/checkpoints/ 
freezed
In[19]
# Run inference with the frozen model; only 10 results are printed, each showing video_id, predicted class, and probability
!python PaddleVideo/freeze_infer.py --use_gpu='True'
W1205 12:54:32.209031  1073 device_context.cc:235] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 9.2, Runtime API Version: 9.0
W1205 12:54:32.213090  1073 device_context.cc:243] device: 0, cuDNN Version: 7.3.
[b'Eu4t', [5], [0.9621682167053223]]
[b'nC4t', [5], [0.6019442081451416]]
[b'0i4t', [18], [0.1530025154352188]]
[b'kB4t', [0], [0.9081666469573975]]
[b'V04t', [0], [0.5676974058151245]]
[b'mQ4t', [10], [0.09806282818317413]]
[b'kI4t', [14], [0.06756216287612915]]
[b'xr4t', [5], [0.7974911332130432]]
[b'oz4t', [0], [0.7906420826911926]]
[b'1E4t', [2], [0.8823133111000061]]

Click the link to try this project hands-on on AI Studio: https://aistudio.baidu.com/aistudio/projectdetail/205016


>> Visit the PaddlePaddle website to learn more.
