cv.Dataset/Dataset - MATLAB File Help
Constructor
ds = cv.Dataset(dstype)
Implements loading dataset: "HMDB: A Large Human Motion Database" Link
Usage:
From link above download dataset files: hmdb51_org.rar & test_train_splits.rar.
Unpack them. Unpack all archives from directory hmdb51_org/ and remove them.
To load data run:
ds = cv.Dataset('AR_hmdb');
ds.load('/home/user/path_to_unpacked_folders/');
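The "unpack all archives and remove them" step can be scripted; a minimal sketch, assuming the external `unrar` tool is on PATH (the path in the commented call is an example, and the `extract` parameter is a hypothetical hook added here for flexibility):

```python
# Sketch: extract each per-class .rar archive in a folder, then delete it.
# Assumes `unrar` is installed and on PATH.
import subprocess
from pathlib import Path

def unpack_and_remove(folder, pattern="*.rar", extract=None):
    """Extract each matching archive into `folder`, then remove the archive."""
    folder = Path(folder)
    if extract is None:
        # Default extractor shells out to the external `unrar` tool.
        extract = lambda archive, dest: subprocess.run(
            ["unrar", "x", "-idq", str(archive), str(dest) + "/"], check=True)
    for archive in sorted(folder.glob(pattern)):
        extract(archive, folder)
        archive.unlink()

# unpack_and_remove("/home/user/path_to_unpacked_folders/hmdb51_org/")
```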
Benchmark:
Note: Precomputed STIP features should be unpacked in the same folder:
/home/user/path_to_unpacked_folders/hmdb51_org_stips/. Unpack all archives
from directory hmdb51_org_stips/ and remove them.
Implements loading dataset: "Sports-1M Dataset" Link
Usage:
From link above download dataset files.
To load data run:
ds = cv.Dataset('AR_sports');
ds.load('/home/user/path_to_downloaded_folders/');
Implements loading dataset: "Adience" Link
Usage:
From link above download any dataset file:
faces.tar.gz\aligned.tar.gz and files with splits:
fold_0_data.txt-fold_4_data.txt,
fold_frontal_0_data.txt-fold_frontal_4_data.txt.
(For the face recognition task, different splits should be created.)
Unpack the dataset file to some folder and place the split files into the same folder.
To load data run:
ds = cv.Dataset('FR_adience');
ds.load('/home/user/path_to_created_folder/');
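Before calling ds.load, it can help to verify that all ten split files listed above sit next to the unpacked data; a small sketch (the function name is mine, and the path in the commented call is an example):

```python
# Sketch: report which expected Adience split files are missing from a folder.
from pathlib import Path

def missing_adience_splits(folder):
    expected = [f"fold_{i}_data.txt" for i in range(5)]
    expected += [f"fold_frontal_{i}_data.txt" for i in range(5)]
    # An empty result means every split file is in place.
    return [name for name in expected if not Path(folder, name).is_file()]

# missing_adience_splits("/home/user/path_to_created_folder/")
```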
Implements loading dataset: "Labeled Faces in the Wild" Link
Usage:
From link above download any dataset file:
lfw.tgz\lfwa.tar.gz\lfw-deepfunneled.tgz\lfw-funneled.tgz,
and files with pairs: 10 test splits pairs.txt and
developer train split pairsDevTrain.txt.
Unpack the dataset file and place pairs.txt and pairsDevTrain.txt
in the created folder.
To load data run:
ds = cv.Dataset('FR_lfw');
ds.load('/home/user/path_to_unpacked_folder/lfw2/');
Benchmark: implemented for this dataset (train split: pairsDevTrain.txt,
dataset: lfwa).
Implements loading dataset: "ChaLearn Looking at People" Link
Usage:
Follow the instructions on the site above and download the files for dataset
"Track 3: Gesture Recognition": Train1.zip-Train5.zip,
Validation1.zip-Validation3.zip.
(Register on the site and accept the terms and conditions of the
competition. There are three mirrors for downloading the dataset files;
at the time of writing, only the "Universitat Oberta de Catalunya"
mirror worked.)
Unpack the train archives Train1.zip-Train5.zip to folder Train/ and the
validation archives Validation1.zip-Validation3.zip to folder Validation/.
Unpack all archives in Train/ & Validation/ into folders with the same
names, for example: Sample0001.zip to Sample0001/.
To load data run:
ds = cv.Dataset('GR_chalearn');
ds.load('/home/user/path_to_unpacked_folders/');
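The per-sample unpacking step above can be sketched with Python's standard zipfile module (folder names follow the steps above; the root path in the commented call is an example):

```python
# Sketch: extract each SampleXXXX.zip in Train/ and Validation/ into a
# folder with the same base name (Sample0001.zip -> Sample0001/).
import zipfile
from pathlib import Path

def unpack_samples(root):
    for sub in ("Train", "Validation"):
        for archive in sorted(Path(root, sub).glob("*.zip")):
            target = archive.with_suffix("")   # strip the .zip extension
            target.mkdir(exist_ok=True)
            with zipfile.ZipFile(archive) as zf:
                zf.extractall(target)

# unpack_samples("/home/user/path_to_unpacked_folders/")
```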
Implements loading dataset: "Sheffield Kinect Gesture Dataset" Link
Usage:
From link above download dataset files:
subject1_dep.7z-subject6_dep.7z, subject1_rgb.7z-subject6_rgb.7z.
Unpack them.
To load data run:
ds = cv.Dataset('GR_skig');
ds.load('/home/user/path_to_unpacked_folders/');
Implements loading dataset: "HumanEva Dataset" Link
Usage:
From link above download dataset files for HumanEva-I (tar) and HumanEva-II.
Unpack them to HumanEva_1 and HumanEva_2 accordingly.
To load data run:
ds = cv.Dataset('HPE_humaneva');
ds.load('/home/user/path_to_unpacked_folders/');
Implements loading dataset: "PARSE Dataset" Link
Usage:
From link above download dataset file: people.zip.
Unpack it.
To load data run:
ds = cv.Dataset('HPE_parse');
ds.load('/home/user/path_to_unpacked_folder/people_all/');
Implements loading dataset: "Affine Covariant Regions Datasets" Link
Usage:
From link above download dataset files:
bark\bikes\boat\graf\leuven\trees\ubc\wall.tar.gz.
Unpack them.
To load data, for example, for "bark", run:
ds = cv.Dataset('IR_affine');
ds.load('/home/user/path_to_unpacked_folder/bark/');
Implements loading dataset: "Robot Data Set, Point Feature Data Set - 2010" Link
Usage:
From link above download dataset files: SET001_6.tar.gz-SET055_60.tar.gz.
Unpack them to one folder.
To load data run:
ds = cv.Dataset('IR_robot');
ds.load('/home/user/path_to_unpacked_folder/');
Implements loading dataset: "The Berkeley Segmentation Dataset and Benchmark" Link
Usage:
From link above download dataset files: BSDS300-human.tgz & BSDS300-images.tgz.
Unpack them.
To load data run:
ds = cv.Dataset('IS_bsds');
ds.load('/home/user/path_to_unpacked_folder/BSDS300/');
Implements loading dataset: "Weizmann Segmentation Evaluation Database" Link
Usage:
From link above download dataset files:
Weizmann_Seg_DB_1obj.ZIP & Weizmann_Seg_DB_2obj.ZIP.
Unpack them.
To load data, for example, for the 1-object dataset, run:
ds = cv.Dataset('IS_weizmann');
ds.load('/home/user/path_to_unpacked_folder/1obj/');
Implements loading dataset: "EPFL Multi-View Stereo" Link
Usage:
From link above download dataset files:
castle_dense\castle_dense_large\castle_entry\fountain\herzjesu_dense\herzjesu_dense_large_bounding\cameras\images\p.tar.gz.
Unpack them in a separate folder for each object. For example,
for "fountain", in folder fountain/:
fountain_dense_bounding.tar.gz -> bounding/,
fountain_dense_cameras.tar.gz -> camera/,
fountain_dense_images.tar.gz -> png/,
fountain_dense_p.tar.gz -> P/.
To load data, for example, for "fountain", run:
ds = cv.Dataset('MSM_epfl');
ds.load('/home/user/path_to_unpacked_folder/fountain/');
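The archive-to-subfolder mapping above for "fountain" can be scripted with the standard tarfile module (the mapping mirrors the list above; the folder path in the commented call is an example):

```python
# Sketch: unpack each fountain archive into the subfolder listed above.
import tarfile
from pathlib import Path

MAPPING = {
    "fountain_dense_bounding.tar.gz": "bounding",
    "fountain_dense_cameras.tar.gz": "camera",
    "fountain_dense_images.tar.gz": "png",
    "fountain_dense_p.tar.gz": "P",
}

def unpack_fountain(folder):
    folder = Path(folder)
    for archive_name, subdir in MAPPING.items():
        archive = folder / archive_name
        if not archive.exists():
            continue                      # skip archives not downloaded yet
        dest = folder / subdir
        dest.mkdir(exist_ok=True)
        with tarfile.open(archive, "r:gz") as tf:
            tf.extractall(dest)

# unpack_fountain("/home/user/path_to_unpacked_folder/fountain/")
```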
Implements loading dataset: "Stereo - Middlebury Computer Vision" Link
Usage:
From link above download dataset files:
dino\dinoRing\dinoSparseRing\temple\templeRing\templeSparseRing.zip.
Unpack them.
To load data, for example, for the "temple" dataset, run:
ds = cv.Dataset('MSM_middlebury');
ds.load('/home/user/path_to_unpacked_folder/temple/');
Implements loading dataset: "ImageNet" Link
Usage:
From link above download dataset files:
ILSVRC2010_images_train.tar, ILSVRC2010_images_test.tar,
ILSVRC2010_images_val.tar & devkit: ILSVRC2010_devkit-1.0.tar.gz.
(Loading of the 2010 dataset is implemented because only this dataset has
ground truth for the test data; the structure of ILSVRC2014 is similar.)
Unpack them to: some_folder/train/, some_folder/test/, some_folder/val/ &
some_folder/ILSVRC2010_validation_ground_truth.txt,
some_folder/ILSVRC2010_test_ground_truth.txt.
Create a file with labels, some_folder/labels.txt, for example using the
Python script below (each row has the format synset,labelID,description;
for example: "n07751451,18,plum").
Unpack all tar files in train/.
To load data run:
ds = cv.Dataset('OR_imagenet');
ds.load('/home/user/some_folder/');
Python script to parse meta.mat:
import scipy.io
meta_mat = scipy.io.loadmat("devkit-1.0/data/meta.mat")
labels_dic = dict((m[0][1][0], m[0][0][0][0] - 1) for m in meta_mat['synsets'])
label_names_dic = dict((m[0][1][0], m[0][2][0]) for m in meta_mat['synsets'])
for label in labels_dic.keys():
    print("{0},{1},{2}".format(label, labels_dic[label], label_names_dic[label]))
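The "unpack all tar files in train" step can be sketched as follows, extracting each per-synset tar into a folder named after the synset and then deleting the tar (the train path in the commented call is an example):

```python
# Sketch: extract each per-synset tar (e.g. n07751451.tar -> n07751451/)
# inside some_folder/train/, then remove the tar.
import tarfile
from pathlib import Path

def unpack_train_tars(train_dir):
    for archive in sorted(Path(train_dir).glob("*.tar")):
        target = archive.with_suffix("")   # strip the .tar extension
        target.mkdir(exist_ok=True)
        with tarfile.open(archive) as tf:
            tf.extractall(target)
        archive.unlink()

# unpack_train_tars("/home/user/some_folder/train/")
```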
Implements loading dataset: "MNIST" Link
Usage:
From link above download dataset files:
t10k-images-idx3-ubyte.gz, t10k-labels-idx1-ubyte.gz,
train-images-idx3-ubyte.gz, train-labels-idx1-ubyte.gz.
Unpack them.
To load data run:
ds = cv.Dataset('OR_mnist');
ds.load('/home/user/path_to_unpacked_files/');
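The unpack step here is plain gzip decompression; a minimal sketch that decompresses each .gz in place so the raw idx files sit side by side (the folder path in the commented call is an example):

```python
# Sketch: decompress every .gz file in a folder, writing the result
# next to it without the .gz extension.
import gzip
import shutil
from pathlib import Path

def gunzip_all(folder):
    for gz in sorted(Path(folder).glob("*.gz")):
        with gzip.open(gz, "rb") as src, open(gz.with_suffix(""), "wb") as dst:
            shutil.copyfileobj(src, dst)

# gunzip_all("/home/user/path_to_unpacked_files/")
```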
Implements loading dataset: "SUN Database, Scene Recognition Benchmark. SUN397" Link
Usage:
From link above download dataset file SUN397.tar & the file with splits,
Partitions.zip.
Unpack SUN397.tar into folder SUN397/ & Partitions.zip into folder
SUN397/Partitions/.
To load data run:
ds = cv.Dataset('OR_sun');
ds.load('/home/user/path_to_unpacked_files/SUN397/');
Implements loading dataset: "Caltech Pedestrian Detection Benchmark" Link
Usage:
From link above download dataset files: set00.tar-set10.tar.
Unpack them to a separate folder.
To load data run:
ds = cv.Dataset('PD_caltech');
ds.load('/home/user/path_to_unpacked_folders/');
Implements loading dataset: "KITTI Vision Benchmark" Link
Usage:
From link above download "Odometry" dataset files:
data_odometry_gray, data_odometry_color, data_odometry_velodyne,
data_odometry_poses, data_odometry_calib.zip.
Unpack data_odometry_poses.zip; it creates folder dataset/poses/.
After that unpack data_odometry_gray.zip, data_odometry_color.zip,
data_odometry_velodyne.zip.
Folder dataset/sequences/ will be created with folders 00/..21/.
Each of these folders will contain: image_0/, image_1/, image_2/,
image_3/, velodyne/ and files calib.txt & times.txt. These last two
files will be replaced after unpacking data_odometry_calib.zip at the end.
To load data run:
ds = cv.Dataset('SLAM_kitti');
ds.load('/home/user/path_to_unpacked_folder/dataset/');
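The unpack order described above matters because data_odometry_calib.zip must overwrite calib.txt and times.txt last; a sketch using the standard zipfile module (archive names from the steps above; the folder path in the commented call is an example):

```python
# Sketch: unpack the KITTI odometry archives in a fixed order so that
# data_odometry_calib.zip replaces calib.txt & times.txt at the end.
import zipfile
from pathlib import Path

ORDER = [
    "data_odometry_poses.zip",
    "data_odometry_gray.zip",
    "data_odometry_color.zip",
    "data_odometry_velodyne.zip",
    "data_odometry_calib.zip",   # last on purpose: overwrites calib/times
]

def unpack_kitti_odometry(folder):
    folder = Path(folder)
    for name in ORDER:
        archive = folder / name
        if archive.exists():
            with zipfile.ZipFile(archive) as zf:
                zf.extractall(folder)

# unpack_kitti_odometry("/home/user/path_to_unpacked_folder/")
```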
Implements loading dataset: "TUMindoor Dataset" Link
Usage:
From link above download dataset files:
dslr\info\ladybug\pointcloud.tar.bz2 for each dataset:
11-11-28 (1st floor), 11-12-13 (1st floor N1),
11-12-17a (4th floor), 11-12-17b (3rd floor),
11-12-17c (Ground I), 11-12-18a (Ground II),
11-12-18b (2nd floor).
Unpack them in a separate folder for each dataset:
dslr.tar.bz2 -> dslr/, info.tar.bz2 -> info/,
ladybug.tar.bz2 -> ladybug/, pointcloud.tar.bz2 -> pointcloud/.
To load each dataset run:
ds = cv.Dataset('SLAM_tumindoor');
ds.load('/home/user/path_to_unpacked_folders/');
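The per-dataset unpacking above follows a uniform rule (each archive goes into a folder named after its stem, e.g. dslr.tar.bz2 -> dslr/), so it can be scripted generically; the dataset path in the commented call is an example:

```python
# Sketch: extract every *.tar.bz2 in a dataset folder into a subfolder
# named after the archive stem (dslr.tar.bz2 -> dslr/).
import tarfile
from pathlib import Path

def unpack_tumindoor(dataset_dir):
    for archive in sorted(Path(dataset_dir).glob("*.tar.bz2")):
        dest = Path(dataset_dir, archive.name[:-len(".tar.bz2")])
        dest.mkdir(exist_ok=True)
        with tarfile.open(archive, "r:bz2") as tf:
            tf.extractall(dest)

# unpack_tumindoor("/home/user/path_to_unpacked_folders/11-11-28/")
```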
Implements loading dataset: "The Chars74K Dataset" Link
Usage:
From link above download dataset files:
EnglishFnt\EnglishHnd\EnglishImg\KannadaHnd\KannadaImg.tgz, ListsTXT.tgz.
Unpack them.
Move the .m files from folder ListsTXT/ to the appropriate folder,
for example English/list_English_Img.m for EnglishImg.tgz.
To load data, for example "EnglishImg", run:
ds = cv.Dataset('TR_chars');
ds.load('/home/user/path_to_unpacked_folder/English/');
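The "move .m files" step for the EnglishImg example above can be sketched as follows (the function name and defaults are mine, taken from the example; the root path in the commented call is an assumption):

```python
# Sketch: move a split-list .m file from ListsTXT/ into its dataset folder,
# e.g. list_English_Img.m -> English/ for EnglishImg.tgz.
import shutil
from pathlib import Path

def place_list_file(root, list_name="list_English_Img.m",
                    dataset_folder="English"):
    src = Path(root, "ListsTXT", list_name)
    dst_dir = Path(root, dataset_folder)
    dst_dir.mkdir(exist_ok=True)
    shutil.move(str(src), str(dst_dir / list_name))

# place_list_file("/home/user/path_to_unpacked_folder/")
```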
Implements loading dataset: "The Street View Text Dataset" Link
Usage:
From link above download dataset file: svt.zip.
Unpack it.
To load data run:
ds = cv.Dataset('TR_svt');
ds.load('/home/user/path_to_unpacked_folder/svt/svt1/');
Implements loading dataset: VOT 2015 Link
VOT 2015 dataset comprises 60 short sequences showing various objects in challenging backgrounds. The sequences were chosen from a large pool of sequences including the ALOV dataset, OTB2 dataset, non-tracking datasets, Computer Vision Online, Professor Bob Fisher's Image Database, Videezy, Center for Research in Computer Vision, University of Central Florida, USA, NYU Center for Genomics and Systems Biology, Data Wrangling, Open Access Directory and Learning and Recognition in Vision Group, INRIA, France. The VOT sequence selection protocol was applied to obtain a representative set of challenging sequences.
Usage:
From link above download dataset file: vot2015.zip.
Unpack vot2015.zip into folder VOT2015/.
To load data run:
ds = cv.Dataset('TRACK_vot');
ds.load('/home/user/path_to_unpacked_files/VOT2015/');
Implements loading dataset: ALOV++ Link