Commit a5a1819

Merge of 2 parents: 95202d2 + 390595a


46 files changed: +215,171 −636 lines

.travis.yml

Lines changed: 3 additions & 3 deletions
```diff
@@ -1,15 +1,15 @@
 dist: xenial
 sudo: required
 language: python
-cache: pip
 python:
-- 3.6.6
+- 3.6.8
 before_install:
 - echo -e "machine github.com\n login $GH_TOKEN" > ~/.netrc
 install:
 - echo "Installing dependencies"
 - sudo apt-get update
-- wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
+- sudo apt-get install unzip
+- wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh --no-verbose
 - bash miniconda.sh -b -p $HOME/miniconda
 - export PATH="$HOME/miniconda/bin:$PATH"
 - hash -r
```

.travis/README.md

Lines changed: 15 additions & 0 deletions
## DSBox Primitive Unit Test

#### Test pipelines

How to add a new test pipeline (e.g. for a new primitive):

1. In `library.py`, follow the existing format (e.g. `DefaultClassificationTemplate`) to create a new `Template` class; see the sketch after this list.
2. If the new pipeline is for a new `runType` (check this to be sure), add the corresponding mapping to the `DATASET_MAPPER` dict at line 416 of `template.py`, following the existing format. Make sure the mapping is correct; otherwise the system will fail to find the correct dataset to run.
3. ~~Go to `generate-pipelines-json.py` and add the new class to the imports (line 8).~~ The system should now import all templates automatically.
4. Add the new template to `TEMPLATE_LIST` in `generate-pipelines-json.py`.
5. The unit test system will then run the new template automatically and generate a corresponding `pipeline.json` file that can be uploaded as a sample pipeline.
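For step 1, here is a minimal, hypothetical sketch of what such a class could look like. The base class name, import path, and every field name below are assumptions; copy the real structure from `DefaultClassificationTemplate` in `library.py`.

```python
# Hypothetical sketch only -- mirror DefaultClassificationTemplate in
# library.py for the actual base class and field names.
from library import DSBoxTemplate  # assumed import path


class MyPrimitiveTemplate(DSBoxTemplate):
    def __init__(self):
        DSBoxTemplate.__init__(self)
        self.template = {
            "name": "my_primitive_template",   # unique template name
            "runType": "classification",       # must have an entry in DATASET_MAPPER (template.py)
            "inputType": "table",
            "output": "model_step",            # the final step producing predictions
            "steps": [
                {
                    "name": "model_step",
                    "primitives": ["d3m.primitives.feature_extraction.yolo.DSBOX"],
                    "inputs": ["template_input"],
                },
            ],
        }
```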
#### Primitives that do not have pipelines yet

1. Data preprocessing: `label_encoder`, `greedy_imputation`, `multitable_featurization`
2. `column_fold` and `unfold`
3. Video classification: `LSTM`, `inceptionV3`
4. Concat related: `horizontal concat`
5. Dataset splitter: `splitter`

.travis/generate-pipelines-json.py

Lines changed: 316 additions & 134 deletions
Large diffs are not rendered by default.

.travis/library.py

Lines changed: 755 additions & 20 deletions
Large diffs are not rendered by default.

.travis/pip_install.sh

Lines changed: 14 additions & 5 deletions
```diff
@@ -1,10 +1,19 @@
 #!/bin/bash
-pip install -e git+https://gitlab.com/datadrivendiscovery/d3m@a8af7585fdd85e2218ca88b257bb0ec71adabfb3#egg=d3m --progress-bar off
-pip install -e git+https://gitlab.com/datadrivendiscovery/common-primitives.git@5c43e65d306a4f36d53db2fb497c9869e2fb7294#egg=common_primitives --progress-bar off
-pip install -e git+https://gitlab.com/datadrivendiscovery/sklearn-wrap@dist#egg=sklearn-wrap --progress-bar off
+pip install -e git+https://gitlab.com/datadrivendiscovery/d3m@be853095932d4a94bea45da61192a926bfcb1dbd#egg=d3m --progress-bar off
+pip install -e git+https://gitlab.com/datadrivendiscovery/common-primitives.git@15e84bff9b310068c071d79b255f3314df183466#egg=common_primitives --progress-bar off
+pip install -e git+https://gitlab.com/datadrivendiscovery/sklearn-wrap@4a2cfd1dc749bb13ce807b2bf2436a45cd49c695#egg=sklearn-wrap --progress-bar off
 pip uninstall -y tensorflow-gpu
 export LD_LIBRARY_PATH="$HOME/miniconda/envs/ta1-test-env/lib:$LD_LIBRARY_PATH"
-pip install tensorflow==1.12.0
+pip install tensorflow==2.0.0
 pip install -e . --progress-bar off
-pip install -e git+https://github.com/brekelma/dsbox_corex@master#egg=dsbox_corex --progress-bar off
+pip install -e git+https://github.com/brekelma/dsbox_corex@5ebdd6ee66aa5ddb48e3c97d98145586d95c9c1e#egg=dsbox_corex --progress-bar off
 pip list
+wget https://github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels.h5 --no-verbose
+mv resnet50_weights_tf_dim_ordering_tf_kernels.h5 bdc6c9f787f9f51dffd50d895f86e469cc0eb8ba95fd61f0801b1a264acb4819
+wget https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5 --no-verbose
+mv vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5 bfe5187d0a272bed55ba430631598124cff8e880b98d38c9e56c8d66032abdc1
+wget https://pjreddie.com/media/files/yolov3.weights --no-verbose
+mv yolov3.weights 523e4e69e1d015393a1b0a441cef1d9c7659e3eb2d7e15f793f060a21b32f297
+wget https://github.com/keras-team/keras-applications/releases/download/resnet/resnet50_weights_tf_dim_ordering_tf_kernels.h5 --no-verbose
+mv resnet50_weights_tf_dim_ordering_tf_kernels.h5 7011d39ea4f61f4ddb8da99c4addf3fae4209bfda7828adb4698b16283258fbe
+ls -l
```
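The `mv` targets above are 64-character hex strings that look like SHA-256 digests of the downloaded weight files, consistent with d3m's convention of addressing static files by digest. A small sketch (not part of the commit) for checking that a downloaded file actually matches its target name:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# The rename is only correct if the file's digest equals the target name.
expected = "523e4e69e1d015393a1b0a441cef1d9c7659e3eb2d7e15f793f060a21b32f297"
assert sha256_of("yolov3.weights") == expected
```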

.travis/pipeline_configs/.DS_Store

-6 KB
Binary file not shown.

.travis/pipeline_configs/default_classification.py

Lines changed: 0 additions & 4 deletions
This file was deleted.
[New file; filename not rendered in this view]

Lines changed: 157 additions & 0 deletions
```json
{
  "id": "dd2d98ed-5d94-4245-a0c9-0861ed7bc177",
  "schema": "https://metadata.datadrivendiscovery.org/schemas/v0/pipeline.json",
  "created": "2020-01-24T01:01:01.853055Z",
  "inputs": [
    {"name": "input dataset"}
  ],
  "outputs": [
    {"data": "steps.4.produce", "name": "predictions of input dataset"}
  ],
  "steps": [
    {
      "type": "PRIMITIVE",
      "primitive": {
        "id": "f31f8c1f-d1c5-43e5-a4b2-2ae4a761ef2e",
        "version": "0.2.0",
        "python_path": "d3m.primitives.data_transformation.denormalize.Common",
        "name": "Denormalize datasets",
        "digest": "5ac405757790f53ed8bfdf782ea5805c3d115dca1df1d1479c6478c6d3038340"
      },
      "arguments": {
        "inputs": {"type": "CONTAINER", "data": "inputs.0"}
      },
      "outputs": [{"id": "produce"}]
    },
    {
      "type": "PRIMITIVE",
      "primitive": {
        "id": "4b42ce1e-9b98-4a25-b68e-fad13311eb65",
        "version": "0.3.0",
        "python_path": "d3m.primitives.data_transformation.dataset_to_dataframe.Common",
        "name": "Extract a DataFrame from a Dataset",
        "digest": "422744651afd5995d029a227a1dd7b1696038816b7eb9601f37d661757812aee"
      },
      "arguments": {
        "inputs": {"type": "CONTAINER", "data": "steps.0.produce"}
      },
      "outputs": [{"id": "produce"}]
    },
    {
      "type": "PRIMITIVE",
      "primitive": {
        "id": "4503a4c6-42f7-45a1-a1d4-ed69699cf5e1",
        "version": "0.3.0",
        "python_path": "d3m.primitives.data_transformation.extract_columns_by_semantic_types.Common",
        "name": "Extracts columns by semantic type",
        "digest": "30cceb9812b430d6550d54766b4f674b68b92531fc2ad63f56818ea002399c13"
      },
      "arguments": {
        "inputs": {"type": "CONTAINER", "data": "steps.1.produce"}
      },
      "outputs": [{"id": "produce"}],
      "hyperparams": {
        "semantic_types": {
          "type": "VALUE",
          "data": [
            "https://metadata.datadrivendiscovery.org/types/PrimaryMultiKey",
            "https://metadata.datadrivendiscovery.org/types/FileName"
          ]
        }
      }
    },
    {
      "type": "PRIMITIVE",
      "primitive": {
        "id": "4503a4c6-42f7-45a1-a1d4-ed69699cf5e1",
        "version": "0.3.0",
        "python_path": "d3m.primitives.data_transformation.extract_columns_by_semantic_types.Common",
        "name": "Extracts columns by semantic type",
        "digest": "30cceb9812b430d6550d54766b4f674b68b92531fc2ad63f56818ea002399c13"
      },
      "arguments": {
        "inputs": {"type": "CONTAINER", "data": "steps.1.produce"}
      },
      "outputs": [{"id": "produce"}],
      "hyperparams": {
        "semantic_types": {
          "type": "VALUE",
          "data": [
            "https://metadata.datadrivendiscovery.org/types/TrueTarget"
          ]
        }
      }
    },
    {
      "type": "PRIMITIVE",
      "primitive": {
        "id": "dsbox-featurizer-object-detection-yolo",
        "version": "1.5.3",
        "python_path": "d3m.primitives.feature_extraction.yolo.DSBOX",
        "name": "DSBox Object Detection YOLO",
        "digest": "2db0c52b7bd9ae94ccfdae549f07a05b936113e59fd07a9ecc4318b5fc3067a2"
      },
      "arguments": {
        "inputs": {"type": "CONTAINER", "data": "steps.2.produce"},
        "outputs": {"type": "CONTAINER", "data": "steps.3.produce"}
      },
      "outputs": [{"id": "produce"}],
      "hyperparams": {
        "epochs": {"type": "VALUE", "data": 200},
        "use_fitted_weight": {"type": "VALUE", "data": false}
      }
    }
  ],
  "name": "DefaultObjectDetectionTemplate:140186136032384",
  "description": "",
  "digest": "862956d95719977f9b0cc485a8742eabb5b2a355b775f9214ea3037281c4d35f"
}
```
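For reference, a pipeline description like the one above can be parsed back with the d3m core package. A sketch, assuming the d3m release pinned in `pip_install.sh` is installed, the referenced primitives are available to the resolver, and the JSON is saved locally as `pipeline.json` (hypothetical filename):

```python
# Sketch: round-trip the generated pipeline description through d3m.
from d3m.metadata.pipeline import Pipeline

with open("pipeline.json") as f:  # hypothetical local filename
    pipeline = Pipeline.from_json(f)

print(pipeline.id)          # dd2d98ed-5d94-4245-a0c9-0861ed7bc177
print(len(pipeline.steps))  # 5 -- the output comes from steps.4.produce
```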
