---
title: "Testing atlantisom: generate survey sampled length comps"
author: "Sarah Gaichas and Christine Stawitz"
date: "`r format(Sys.time(), '%d %B, %Y')`"
output: html_document
bibliography: "packages.bib"
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
# automatically create a bib database for R packages
knitr::write_bib(c(
.packages(), 'knitr', 'rmarkdown', 'tidyr', 'dplyr', 'ggplot2',
'data.table', 'here', 'ggforce', 'ggthemes'
), 'packages.bib')
```
## Introduction

This page documents initial testing of the atlantisom package in development at https://github.com/r4atlantis/atlantisom using three different [Atlantis](https://research.csiro.au/atlantis/) output datasets. Development of atlantisom began at the [2015 Atlantis Summit](https://research.csiro.au/atlantis/atlantis-summit/) in Honolulu, Hawaii, USA.

The purpose of atlantisom is to use existing Atlantis model output to generate input datasets for a variety of models, so that the performance of these models can be evaluated against known (simulated) ecosystem dynamics. Atlantis models can be run using different climate forcing, fishing, and other scenarios. Users of atlantisom will be able to specify fishery independent and fishery dependent sampling in space and time, as well as species-specific catchability, selectivity, and other observation processes for any Atlantis scenario. Internally consistent multispecies and ecosystem datasets with known observation error characteristics will be the atlantisom outputs, for use in individual model performance testing, comparing performance of alternative models, and performance testing of model ensembles against "true" Atlantis outputs.

Initial testing was conducted by S. Gaichas using R scripts in the R folder of this repository that are titled "PoseidonTest_[whatwastested].R". Initial tests are expanded and documented in more detail in these pages. C. Stawitz improved and streamlined the setup and initialization sections.

## Setup

First, you will want to set up libraries and install atlantisom if you haven't already. This document assumes atlantisom is already installed. For complete setup and initialization, please see [TrueBioTest](https://sgaichas.github.io/poseidon-dev/TrueBioTest.html).

This document is written in R Markdown [@R-rmarkdown], and we use several packages to produce the outputs [@R-tidyr; @R-dplyr; @R-ggplot2; @R-here; @R-ggforce; @R-ggthemes].

```{r message=FALSE, warning=FALSE}
library(tidyr)
require(dplyr)
library(ggplot2)
library(data.table)
library(here)
library(ggforce)
library(ggthemes)
library(atlantisom)
```
## Initialize input files and directories, read in "truth"

Abbreviated here; for a full explanation please see [TrueBioTest](https://sgaichas.github.io/poseidon-dev/TrueBioTest.html). This document assumes that `atlantisom::run_truth` has already completed and stored an .RData file in the atlantis output model directory.

```{r initialize}
initCCA <- TRUE
initNEUS <- FALSE
initNOBA <- FALSE
if(initCCA) source(here("config/CCConfig.R"))
if(initNEUS) source(here("config/NEUSConfig.R"))
if(initNOBA) source(here("config/NOBAConfig.R"))
```
```{r get_names, message=FALSE, warning=FALSE}
#Load functional groups
funct.groups <- load_fgs(dir=d.name,
file_fgs = functional.groups.file)
#Get just the names of active functional groups
funct.group.names <- funct.groups %>%
filter(IsTurnedOn == 1) %>%
select(Name) %>%
.$Name
```
```{r load_Rdata, message=FALSE, warning=FALSE}
if(initCCA) {
truth.file <- "outputCCV3run_truth.RData"
load(file.path(d.name, truth.file))
truth <- result
}
if(initNEUS) {
truth.file <- "outputneusDynEffort_Test1_run_truth.RData"
load(file.path(d.name, truth.file))
truth <- result
}
if(initNOBA){
truth.file <- "outputnordic_runresults_01run_truth.RData"
load(file.path(d.name, truth.file))
truth <- result
}
```
## Simulate a survey part 4: sample for length composition

This section uses `atlantisom::create_survey()` and `atlantisom::sample_fish()` to get a biological sample dataset. From that dataset, several other functions can be run:

 *  age samples from `calc_stage2age` and `sample_ages`
 *  length samples from `calc_age2length` and `sample_lengths` (TO BE WRITTEN)
 *  weight samples from `calc_age2length` and `sample_weights` (TO BE WRITTEN)
 *  diet samples from `sample_diet`

Atlantis outputs numbers by cohort (stage-age) and growth information, but does not output the size of animals directly. The function we are testing here, `atlantisom::calc_age2length`, converts numbers by cohort to a length composition. The [workflow originally envisioned](https://onedrive.live.com/?authkey=%21AFQkOoKRz64TLUw&cid=59547B4CB95EF108&id=59547B4CB95EF108%216291&parId=59547B4CB95EF108%216262&o=OneUp) was to create a survey, sample fish from the survey, then apply this function. We determined that it would not work better to create a length comp for the whole population, which could then be sampled. We rethought what we needed [here](https://sgaichas.github.io/poseidon-dev/RethinkSamplingFunctions.html) and test the new functions in this document.

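The idea behind converting numbers at age to a length composition can be sketched in base R: invert an assumed length-weight relationship (W = aL^b) to get mean length at age, then spread numbers at age across 1 cm length bins with a normal distribution and a length-at-age CV. All values below (`lw_a`, `lw_b`, the weights and numbers) are made up for illustration only; the actual `calc_age2length` derives individual weight from the structural and reserve nitrogen outputs, so see the atlantisom source for the real calculation.

```r
# Illustrative sketch only, not the atlantisom internals.
lw_a <- 0.01   # assumed length-weight coefficient: W = a * L^b (W in g, L in cm)
lw_b <- 3.0    # assumed length-weight exponent

wt_at_age  <- c(50, 200, 450, 800, 1200)   # hypothetical mean weight (g) per age class
num_at_age <- c(1000, 700, 400, 200, 80)   # hypothetical numbers per age class
cv_lenage  <- 0.1                          # CV of length at age, as used in this document

mulen <- (wt_at_age / lw_a)^(1 / lw_b)     # invert W = a*L^b for mean length at age

# spread numbers at age across 1 cm length bins with a normal distribution
bins <- seq(0, 60, by = 1)
natlength <- sapply(seq_along(mulen), function(i) {
  sigma <- cv_lenage * mulen[i]
  p <- diff(pnorm(bins, mean = mulen[i], sd = sigma))
  num_at_age[i] * p / sum(p)               # renormalize so numbers are conserved
})
lengthcomp <- rowSums(natlength)           # total numbers in each 1 cm length bin
```

With these made-up values the age classes overlap in length, which is why a length comp alone loses some of the age information present in the cohort outputs.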
### Standard survey settings for testing

To create a survey, the user specifies the timing of the survey, which species are captured, the spatial coverage of the survey, the species-specific survey efficiency ("q"), and the selectivity at age for each species.

```{r sppgroups}
# make defaults that return a standard survey, implement in standard_survey
# users need to map their species groups into these general ones
# large pelagics/reef associated/burrowers/otherwise non-trawlable
# pelagics
# demersals
# selected flatfish
if(initNOBA) funct.groups <- rename(funct.groups, GroupType = InvertType)
survspp <- funct.groups$Name[funct.groups$IsTurnedOn==1 &
funct.groups$GroupType %in% c("FISH", "SHARK")]
if(initCCA) { #Sarah's CCA Grouping
nontrawl <- c("Shark_C","Yelloweye_rockfish","Benthopel_Fish","Pisciv_S_Fish",
"Pisciv_T_Fish","Shark_D","Shark_P")
pelagics <- c("Pisciv_V_Fish","Demersal_S_Fish","Pacific_Ocean_Perch","Mesopel_M_Fish",
"Planktiv_L_Fish","Jack_mackerel","Planktiv_S_Fish","Pacific_sardine",
"Anchovy","Herring","Pisciv_B_Fish")
demersals <- c("Demersal_P_Fish","Planktiv_O_Fish","Demersal_D_Fish",
"Demersal_DC_Fish","Demersal_O_Fish","Darkblotched_rockfish",
"Demersal_F_Fish","Demersal_E_Fish","Bocaccio_rockfish",
"Demersal_B_Fish","Shark_R","Mesopel_N_Fish","Shark_B","Spiny_dogfish",
"SkateRay")
selflats <- c("Pisciv_D_Fish", "Arrowtooth_flounder","Petrale_sole")
}
if(initNEUS) { # Sarah's NEUS Grouping
nontrawl <- c("Pisciv_T_Fish", "Shark_D", "Shark_P", "Reptile", "Mesopel_M_Fish")
pelagics <- c("Planktiv_L_Fish", "Planktiv_S_Fish", "Benthopel_Fish", "Pisciv_S_Fish")
demersals <- c("Pisciv_D_Fish", "Demersal_D_Fish","Demersal_E_Fish",
"Demersal_S_Fish","Demersal_B_Fish","Demersal_DC_Fish",
"Demersal_O_Fish","Demersal_F_Fish",
"Shark_B", "SkateRay")
selflats <- c("Pisciv_B_Fish")
}
if(initNOBA) { # Sarah's NOBA Grouping
nontrawl <- c("Sharks_other", "Pelagic_large","Mesop_fish")
pelagics <- c("Pelagic_small","Redfish_other","Mackerel","Haddock",
"Saithe","Redfish","Blue_whiting","Norwegian_ssh","Capelin")
demersals <- c("Demersals_other","Demersal_large","Flatfish_other","Skates_rays",
"Green_halibut","North_atl_cod","Polar_cod","Snow_crab")
selflats <- c("Long_rough_dab")
}
```
The following settings are for our example standard survey once per year, most areas, with mixed efficiency and selectivity:
```{r stdbtsurvey-spec, message=FALSE, warning=FALSE}
# general specifications for bottom trawl survey, with items defined above commented out to avoid wasting time loading already loaded files:
# once per year at mid year
# generalized timesteps all models
runpar <- load_runprm(d.name, run.prm.file)
noutsteps <- runpar$tstop/runpar$outputstep
stepperyr <- if(runpar$outputstepunit=="days") 365/runpar$toutinc
midptyr <- round(median(seq(1,stepperyr)))
annualmidyear <- seq(midptyr, noutsteps, stepperyr)
# ~75-80% of boxes (leave off deeper boxes?)
boxpars <- load_box(d.name, box.file)
boxsurv <- c(2:round(0.8*(boxpars$nbox - 1)))
# define bottom trawl mixed efficiency
ef.nt <- 0.01 # for large pelagics, reef dwellers, others not in trawlable habitat
ef.pl <- 0.1 # for pelagics
ef.dm <- 0.7 # for demersals
ef.fl <- 1.1 # for selected flatfish
# bottom trawl survey efficiency specification by species group
effnontrawl <- data.frame(species=nontrawl, efficiency=rep(ef.nt,length(nontrawl)))
effpelagics <- data.frame(species=pelagics, efficiency=rep(ef.pl,length(pelagics)))
effdemersals <- data.frame(species=demersals, efficiency=rep(ef.dm,length(demersals)))
effselflats <- data.frame(species=selflats, efficiency=rep(ef.fl,length(selflats)))
efficmix <- bind_rows(effnontrawl, effpelagics, effdemersals, effselflats)
# mixed selectivity (using 10 agecl for all species)
# flat=1 for large pelagics, reef dwellers, others not in trawlable habitat
# sigmoid 0 to 1 with 0.5 inflection at agecl 3 for pelagics, reaching 1 at agecl 5, flat top
# sigmoid 0 to 1 with 0.5 inflection at agecl 5 for most demersals and flatfish, reaching 1 at agecl 7, flat top
# dome shaped 0 to 1 at agecl 6&7 for selected demersals, falling off to 0.7 by agecl 10
sigmoid <- function(a,b,x) {
1 / (1 + exp(-a-b*x))
}
# survey selectivity specification by species group
selnontrawl <- data.frame(species=rep(nontrawl, each=10),
agecl=rep(c(1:10),length(nontrawl)),
selex=rep(1.0,length(nontrawl)*10))
selpelagics <- data.frame(species=rep(pelagics, each=10),
agecl=rep(c(1:10),length(pelagics)),
selex=sigmoid(5,1,seq(-10,10,length.out=10)))
seldemersals <- data.frame(species=rep(demersals, each=10),
agecl=rep(c(1:10),length(demersals)),
selex=sigmoid(1,1,seq(-10,10,length.out=10)))
selselflats <- data.frame(species=rep(selflats, each=10),
agecl=rep(c(1:10),length(selflats)),
selex=sigmoid(1,1,seq(-10,10,length.out=10)))
selexmix <- bind_rows(selnontrawl, selpelagics, seldemersals, selselflats)
# use this constant 0 cv for testing
surv_cv_0 <- data.frame(species=survspp, cv=rep(0.0,length(survspp)))
# define bottom trawl survey cv by group
cv.nt <- 1.0 # for large pelagics, reef dwellers, others not in trawlable habitat
cv.pl <- 0.5 # for pelagics
cv.dm <- 0.3 # for demersals
cv.fl <- 0.3 # for selected flatfish
# specify cv by species groups
surv_cv_nontrawl <- data.frame(species=nontrawl, cv=rep(cv.nt,length(nontrawl)))
surv_cv_pelagics <- data.frame(species=pelagics, cv=rep(cv.pl,length(pelagics)))
surv_cv_demersals <- data.frame(species=demersals, cv=rep(cv.dm,length(demersals)))
surv_cv_selflats <- data.frame(species=selflats, cv=rep(cv.fl,length(selflats)))
surv_cv_mix <- bind_rows(surv_cv_nontrawl, surv_cv_pelagics, surv_cv_demersals, surv_cv_selflats)
```
Here we use `create_survey` on the numbers output of `run_truth` to create the standard survey age composition.
```{r stdsurveyNbased}
survey_testNstd <- create_survey(dat = truth$nums,
time = annualmidyear,
species = survspp,
boxes = boxsurv,
effic = efficmix,
selex = selexmix)
# consider saving this interim step if it takes a long time to generate
```
Next we apply the `aggregateDensityData` function to resn and structn for survey times, species, and boxes.
```{r aggdens}
# aggregate true resn per survey design
survey_aggresnstd <- aggregateDensityData(dat = truth$resn,
time = annualmidyear,
species = survspp,
boxes = boxsurv)
# aggregate true structn per survey design
survey_aggstructnstd <- aggregateDensityData(dat = truth$structn,
time = annualmidyear,
species = survspp,
boxes = boxsurv)
```
Now we should have inputs to `sample_fish` on the same scale, and they need to be aggregated across boxes into a single biological sample for the whole survey. The `sample_fish` function applies the median for aggregation and does not apply multinomial sampling if `sample=FALSE` in the function call.
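As an illustration of the two behaviors just described (a base R sketch under assumed data, not the `sample_fish` source), median aggregation across boxes and an optional multinomial draw look like this:

```r
# Illustrative sketch: hypothetical numbers-at-age in three boxes
# for one species at one time step.
set.seed(123)
nums_by_box <- rbind(box1 = c(100, 60, 30, 10),
                     box2 = c(120, 55, 25, 12),
                     box3 = c( 90, 70, 35,  8))

# sample = FALSE behavior: median across boxes, no sampling
agg_nums <- apply(nums_by_box, 2, median)

# sample = TRUE behavior: multinomial draw of effN fish in proportion
# to the aggregated numbers-at-age
effN <- 1000
sampled <- as.vector(rmultinom(1, size = effN, prob = agg_nums / sum(agg_nums)))
```

This is why `effNmix` has no effect on the density data below: with `sample = FALSE` only the median aggregation step runs.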
```{r stdsurvey-lensamp, warning=FALSE, message=FALSE}
# define n fish for biological sampling by group
# this could easily be a vector or time series, constant here
ns.nt <- 25 # for large pelagics, reef dwellers, others not in trawlable habitat
ns.pl <- 1000 # for pelagics
ns.dm <- 1000 # for demersals
ns.fl <- 1000 # for selected flatfish
effNnontrawl <- data.frame(species=nontrawl, effN=rep(ns.nt,length(nontrawl)))
effNpelagics <- data.frame(species=pelagics, effN=rep(ns.pl,length(pelagics)))
effNdemersals <- data.frame(species=demersals, effN=rep(ns.dm,length(demersals)))
effNselflats <- data.frame(species=selflats, effN=rep(ns.fl,length(selflats)))
effNmix <- bind_rows(effNnontrawl, effNpelagics, effNdemersals, effNselflats)
# apply default sample fish as before to get numbers
numsstd <- sample_fish(survey_testNstd, effNmix)
# don't sample these, just aggregate them using the median (effNmix does nothing)
structnstd <- sample_fish(survey_aggstructnstd, effNmix, sample = FALSE)
resnstd <- sample_fish(survey_aggresnstd, effNmix, sample = FALSE)
```
```{r stdsurvey-calcage2length, warning=FALSE, message=FALSE}
# now cut these down to a single species for testing
atf_numsstd <- numsstd[numsstd$species == "Arrowtooth_flounder",]
atf_structnstd <- structnstd[structnstd$species == "Arrowtooth_flounder",]
atf_resnstd <- resnstd[resnstd$species == "Arrowtooth_flounder",]
atf_length_stdsurv <- calc_age2length(structn = atf_structnstd,
resn = atf_resnstd,
nums = atf_numsstd,
biolprm = truth$biolprm, fgs = truth$fgs,
CVlenage = 0.1, remove.zeroes=TRUE)
```
Plot samples for one species:
```{r atflengthsamp1-test}
lfplot <- ggplot(atf_length_stdsurv$natlength, aes(upper.bins)) +
geom_bar(aes(weight = atoutput)) +
theme_tufte()
lfplot + facet_wrap_paginate(~time, ncol=4, nrow = 4, page = 1, scales="free_y")
lfplot + facet_wrap_paginate(~time, ncol=4, nrow = 4, page = 2, scales="free_y")
lfplot + facet_wrap_paginate(~time, ncol=4, nrow = 4, page = 3, scales="free_y")
lfplot + facet_wrap_paginate(~time, ncol=4, nrow = 4, page = 4, scales="free_y")
lfplot + facet_wrap_paginate(~time, ncol=4, nrow = 4, page = 5, scales="free_y")
lfplot + facet_wrap_paginate(~time, ncol=4, nrow = 4, page = 6, scales="free_y")
lfplot + facet_wrap_paginate(~time, ncol=4, nrow = 4, page = 7, scales="free_y")
# try dir = "v" option for vertical lf comparisons
```
## Samples for all species, then SAVE

Now run for all species in the survey.

```{r allspplengthsamp, eval=FALSE}
length_stdsurv <- calc_age2length(structn = structnstd,
resn = resnstd,
nums = numsstd,
biolprm = truth$biolprm, fgs = truth$fgs,
CVlenage = 0.1, remove.zeroes=TRUE)
#save for later use, takes a long time to generate
saveRDS(length_stdsurv, file.path(d.name, paste0(scenario.name, "length_stdsurv.rds")))
```
Runtime for census lengths was 5.5 hours for CCA. For standard survey lengths, runtime was not reduced because the dimensions (species and timesteps) were identical.
Compare sampled lf with census from [here](https://sgaichas.github.io/poseidon-dev/CCATrueLengthCompTest.html):
```{r loadlengthcomp, echo=TRUE}
length_censussurvsamp <- readRDS(file.path(d.name, paste0(scenario.name, "length_censussurvsamp.rds")))
length_stdsurv <- readRDS(file.path(d.name, paste0(scenario.name, "length_stdsurv.rds")))
```
Comparison plots for other species. Blue bars are the census length sample and red outlines are the survey sample (standardized for comparison; census total n lengths is 1e+8 while sample total n lengths is 1000):
```{r moreplots, echo=TRUE}
# select a species
ss_length_censussurvsamp <- length_censussurvsamp$natlength %>%
filter(species == "Herring")
ss_length_stdsurv <- length_stdsurv$natlength %>%
filter(species == "Herring")
# make proportion at length to plot together
lf_census_tot <- aggregate(ss_length_censussurvsamp$atoutput,list(ss_length_censussurvsamp$time),sum )
names(lf_census_tot) <- c("time","totlen")
lf_census_prop <- merge(ss_length_censussurvsamp, lf_census_tot)
lf_samp_tot <- aggregate(ss_length_stdsurv$atoutput,list(ss_length_stdsurv$time),sum )
names(lf_samp_tot) <- c("time","totsamp")
lf_samp_prop <- merge(ss_length_stdsurv, lf_samp_tot)
# add sample, this is census, probably want it first?
lfplot <- ggplot(mapping=aes(x=upper.bins)) +
geom_bar(data=lf_census_prop, aes(weight = atoutput/totlen), fill="blue") +
geom_bar(data=lf_samp_prop, aes(weight = atoutput/totsamp), colour="red") +
theme_tufte()
lfplot + facet_wrap_paginate(~time, ncol=4, nrow = 4, page = 1, scales="free_y")
lfplot + facet_wrap_paginate(~time, ncol=4, nrow = 4, page = 2, scales="free_y")
lfplot + facet_wrap_paginate(~time, ncol=4, nrow = 4, page = 3, scales="free_y")
lfplot + facet_wrap_paginate(~time, ncol=4, nrow = 4, page = 4, scales="free_y")
lfplot + facet_wrap_paginate(~time, ncol=4, nrow = 4, page = 5, scales="free_y")
lfplot + facet_wrap_paginate(~time, ncol=4, nrow = 4, page = 6, scales="free_y")
lfplot + facet_wrap_paginate(~time, ncol=4, nrow = 4, page = 7, scales="free_y")
```
Here, as with [cohort sampling](https://sgaichas.github.io/poseidon-dev/StdSurvLengthAgeCompTest.html), the impact of survey selectivity is much larger than sampling error with an effN of 1000. We could try a lower effN, but 1000 lengths may be reasonable from a survey. A `sample_lengths` function could add further measurement error on top of this, if desired (though measurement error at the 1 cm level may be negligible).
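The point about sampling error can be checked with a quick simulation (hypothetical true length comp, illustrative only): multinomial sampling error shrinks roughly with the square root of effN, so at effN = 1000 the sampled proportions sit close to the true ones.

```r
# Quick illustration: multinomial sampling error at two effN values.
set.seed(42)
true_prop <- c(0.05, 0.15, 0.30, 0.25, 0.15, 0.10)  # hypothetical length comp

max_err <- function(effN) {
  samp <- rmultinom(1, size = effN, prob = true_prop) / effN
  max(abs(samp - true_prop))   # worst-bin deviation from the true proportions
}

err_100  <- mean(replicate(500, max_err(100)))
err_1000 <- mean(replicate(500, max_err(1000)))
# expect err_1000 to be noticeably smaller than err_100
```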

## References