
Commit bda1d23

Author: Benoît Fonty
feature: generate the mutations directly from the DVF files and add 2020

1 parent e6a296f commit bda1d23

File tree: 5 files changed, +34 −6 lines changed

.gitmodules

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+[submodule "db/dvf"]
+	path = db/dvf
+	url = git@github.com:etalab/dvf.git
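The three added lines register `db/dvf` as a git submodule. One way to sanity-check such an entry is to read it back with git's own config parser. A minimal sketch, assuming only that `git` is on the PATH and that the obfuscated submodule URL is the standard SSH form `git@github.com:etalab/dvf.git`:

```shell
# Write the .gitmodules entry from this commit to a scratch file,
# then query it with git config's --file mode.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[submodule "db/dvf"]
	path = db/dvf
	url = git@github.com:etalab/dvf.git
EOF
git config -f "$tmp" --get submodule.db/dvf.path   # prints: db/dvf
```

The same `git config -f` query works against a repository's real `.gitmodules` file, which is useful in scripts that need the submodule path or URL without parsing the INI syntax by hand.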

README.md

Lines changed: 1 addition & 0 deletions
@@ -68,6 +68,7 @@ Creating the database and importing the data:
 The script starts by creating a PostgreSQL database and a table, then downloads the DVF data reprocessed by Etalab, available [here](https://github.com/etalab/dvf/). Finally, a few post-processing steps are run (this takes a few minutes).
 
 ```bash
+$ git submodule update --init
 $ sh db/build_db.sh
 ```

db/build_db.sh

Mode changed: 100644 → 100755
Lines changed: 28 additions & 5 deletions
@@ -3,26 +3,49 @@
 DIR=$(echo $(dirname $0))
 cd $DIR
 
+
+declare -A dvf_datasets
+dvf_datasets[2015]="09f013c5-9531-444b-ab6c-7a0e88efd77d"
+dvf_datasets[2016]="0ab442c5-57d1-4139-92c2-19672336401c"
+dvf_datasets[2017]="7161c9f2-3d91-4caf-afa2-cfe535807f04"
+dvf_datasets[2018]="1be77ca5-dc1b-4e50-af2b-0240147e0346"
+dvf_datasets[2019]="3004168d-bec4-44d9-a781-ef16f41856a2"
+dvf_datasets[2020]="90a98de0-f562-4328-aa16-fe0dd1dca60f"
+
+datasets_url="https://www.data.gouv.fr/fr/datasets/r/"
+
 sudo -u postgres psql -c "DROP DATABASE IF EXISTS dvf_202004;"
 sudo -u postgres psql -c "CREATE DATABASE dvf_202004;"
 sudo -u postgres psql -c "ALTER DATABASE dvf_202004 SET datestyle TO ""ISO, DMY"";"
 sudo -u postgres psql -d dvf_202004 -f "create_table.sql"
 
 # Download the data onto the server
-DATADIR="data"
+DATADIR="dvf/data"
 mkdir -p $DATADIR
 
-for YEAR in 2014 2015 2016 2017 2018 2019
+for YEAR in 2015 2016 2017 2018 2019 2020
 do
-[ ! -f $DATADIR/full_$YEAR.csv.gz ] && wget -r -np -nH -N --cut-dirs 5 https://cadastre.data.gouv.fr/data/etalab-dvf/latest/csv/$YEAR/full.csv.gz -O $DATADIR/full_$YEAR.csv.gz
+[ ! -f $DATADIR/valeursfoncieres-$YEAR.txt.gz ] && wget -r -np -nH -N --cut-dirs 5 ${datasets_url}${dvf_datasets[${YEAR}]} -O $DATADIR/valeursfoncieres-$YEAR.txt && \
+echo "gzipping file $DATADIR/valeursfoncieres-$YEAR.txt because the improve-csv script needs a gzipped file as input" && \
+gzip -f $DATADIR/valeursfoncieres-$YEAR.txt
 done
 
-find $DATADIR -name '*.gz' -exec gunzip -f '{}' \;
+cd dvf
+npm install
+cd ..
+
+export ANNEES=2015,2016,2017,2018,2019,2020
+export COG_MILLESIME="2020"
+export CADASTRE_MILLESIME="2020-07-01"
+export DISABLE_GEOCODING="1"
+node --max-old-space-size=8192 dvf/improve-csv
 
 # Load the data into Postgres
 DATAPATH=$( cd $DATADIR ; pwd -P )
-for YEAR in 2014 2015 2016 2017 2018 2019
+for YEAR in 2015 2016 2017 2018 2019 2020
 do
+mv dvf/dist/${YEAR}/full.csv.gz $DATAPATH/full_$YEAR.csv.gz
+gunzip -f $DATAPATH/full_$YEAR.csv.gz
 sudo -u postgres psql -d dvf_202004 -c "COPY dvf FROM '$DATAPATH/full_$YEAR.csv' delimiter ',' csv header encoding 'UTF8';"
 done
 
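The new download loop builds each URL from a fixed data.gouv.fr resource base plus a per-year resource id held in a bash associative array. A minimal sketch of that lookup, reusing two of the ids declared in the script but echoing the resolved URLs instead of calling `wget`:

```shell
#!/usr/bin/env bash
# Per-year data.gouv.fr resource ids, as declared in db/build_db.sh.
declare -A dvf_datasets
dvf_datasets[2019]="3004168d-bec4-44d9-a781-ef16f41856a2"
dvf_datasets[2020]="90a98de0-f562-4328-aa16-fe0dd1dca60f"

datasets_url="https://www.data.gouv.fr/fr/datasets/r/"

# Resolve and print the download URL for each year (no network access).
for YEAR in 2019 2020; do
  echo "${datasets_url}${dvf_datasets[${YEAR}]}"
done
```

Note that `declare -A` requires bash 4+; a plain POSIX `sh` would need a different lookup (e.g. a `case` statement), which matters since the README runs the script with `sh db/build_db.sh`.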

db/dvf

Submodule dvf added at b8a62c4

static/js/index.js

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ vue.$watch('fold_left', function () {
 // Global variable definitions
 
 var MIN_DATE = '2015-01-01'
-var MAX_DATE = '2019-12-31'
+var MAX_DATE = '2020-06-30'
 
 var map = null;
 var mapLoaded = false;
