This vignette covers some details of cohort splitting. It's probably not very interesting to most people; it is aimed at those who want to know how the SCM technique works in detail. It also uses a number of non-exported, undocumented functions from plant, so you'll see a lot of plant::: prefixes.

The default cohort introduction times are designed to concentrate cohort introductions at earlier times, reflecting where cohort refinement is empirically most often needed:
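
A minimal sketch of how those default times can be inspected, assuming a recent plant API (`scm_base_parameters("FF16")`, `expand_parameters()`, and the `cohort_schedule_times` field; older versions differ slightly, and the lma value here is just an example):

```r
library(plant)

## Set up a single-species community and pull out its default
## cohort introduction times.
p0 <- scm_base_parameters("FF16")
p  <- expand_parameters(trait_matrix(0.0825, "lma"), p0, mutant = FALSE)
t1 <- p$cohort_schedule_times[[1]]  # introduction times for species 1
plot(t1, pch = 19, cex = 0.3, las = 1,
     xlab = "Cohort index", ylab = "Introduction time (years)")
```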

The actual differences are stepped, in order to increase the chance that cohorts from different species will be introduced at the same time, which reduces the total amount of work being done.
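
Continuing the sketch above, the gaps between successive default times show this step structure:

```r
## Intervals between successive introduction times; these come in a small
## number of discrete step sizes rather than growing smoothly.
dt <- diff(t1)
plot(t1[-1], dt, log = "y", pch = 19, cex = 0.3, las = 1,
     xlab = "Time (years)", ylab = "Interval to next cohort (years)")
table(signif(dt, 3))
```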

We can create more refined schedules by interleaving new points between the existing ones:
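
One simple way to do this, written here as a small illustrative helper (not part of plant's API), is to insert the midpoint between each pair of adjacent times:

```r
## Insert the midpoint between each pair of adjacent times, roughly
## doubling the resolution of the schedule.
interleave <- function(x) {
  sort(c(x, (x[-1] + x[-length(x)]) / 2))
}
t2 <- interleave(t1)  # once refined
t3 <- interleave(t2)  # twice refined
c(length(t1), length(t2), length(t3))
```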

Consider running the SCM and computing seed rain at the end; this is one of the key outputs from the model, so it is a reasonable quantity in which to look for differences.
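
A sketch of such a comparison. The helper below assumes the schedule can be assigned back into the parameters via `cohort_schedule_times` and that the SCM object exposes seed rain as `seed_rains`; both names are assumptions about the plant version in use.

```r
## Run the SCM with a given schedule for species 1 and return its seed rain.
run_with_times <- function(p, times) {
  p$cohort_schedule_times[[1]] <- times
  run_scm(p)$seed_rains
}
w <- c(run_with_times(p, t1),   # base schedule
       run_with_times(p, t2),   # once refined
       run_with_times(p, t3))   # twice refined
w
```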

Seed rain increases as cohorts are introduced more finely, though the increase appears to saturate. Note that we are doing a lot more work at the more refined end!

## [1] 16.88934 16.87365 17.10484

The differences in seed rain are not actually that striking here (around 1%), but they can be larger in other runs, and the variation creates instabilities.
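
Using the values printed above, the relative spread works out at a little over a percent:

```r
## Relative spread of seed rain across the three schedules.
diff(range(w)) / mean(w)  # about 0.014
```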

Where is the fitness difference coming from?

Consider adding a single additional cohort at one of the points along the first vector of times, t1, and computing fitness:
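
A sketch of that experiment, reusing the assumed `run_with_times()` helper from above: for each gap in t1, insert one extra time at its midpoint, rerun, and record the change in seed rain. (This is slow, since it involves one SCM run per candidate point.)

```r
## Seed rain after inserting a single extra cohort in the i-th gap of t1.
fitness_with_extra_cohort <- function(i, p, times) {
  extra <- (times[i] + times[i + 1]) / 2
  run_with_times(p, sort(c(times, extra)))
}
idx <- seq_len(length(t1) - 1)
w_extra <- vapply(idx, fitness_with_extra_cohort, numeric(1),
                  p = p, times = t1)
plot(t1[idx], w_extra - w[1], type = "l", las = 1,
     xlab = "Time of extra cohort (years)", ylab = "Change in seed rain")
```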

The internal function plant:::run_scm_error runs the SCM and computes error estimates as the integration proceeds; this helps shed some light on where the difference arises.
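
For example (the structure of the returned object is an assumption here; roughly, it bundles the final seed rain with per-cohort error estimates accumulated during the run):

```r
## Run the SCM and collect error estimates as the integration proceeds.
res <- plant:::run_scm_error(p)
str(res, max.level = 2)
```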

The biggest deviations in output seed rain come about half way through the schedule:

Though, because of the compression of early times, this is still fairly early in absolute time:
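
Continuing that sketch, and assuming the error object carries a per-cohort total error for species 1 under `err$total` (the element names are assumptions), the worst cohort can be located both by position in the schedule and by introduction time:

```r
## Locate the cohort with the largest error estimate.
err <- res$err$total[[1]]  # assumed element names
i <- which.max(err)
i / length(err)            # roughly half way through the schedule
t1[i]                      # but still early in absolute time
```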

Now look at the contribution of different cohorts to seed rain (the x axis is log-scaled for clarity). In this case almost all of the contribution comes from early cohorts (this is essentially a single-age stand of pioneers). Overlaid on this are the five cohorts with the largest change in total fitness (biggest difference in red). The difference is not coming from the seed rain contributions of those cohorts, which are essentially zero, though they are higher than those of the surrounding cohorts.
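
A sketch of that plot, assuming a vector `seed_rain_cohort` of per-cohort contributions to seed rain has been extracted (the accessor for this differs between plant versions, so it is left as a placeholder here), together with the per-cohort fitness changes computed above:

```r
## Per-cohort contributions to seed rain (log-scaled x axis), with the five
## cohorts whose total fitness changed most highlighted in red.
## `seed_rain_cohort` is a placeholder for the extracted contributions.
plot(t1, seed_rain_cohort, log = "x", pch = 19, cex = 0.3, las = 1,
     xlab = "Introduction time (years)", ylab = "Contribution to seed rain")
worst <- order(abs(w_extra - w[1]), decreasing = TRUE)[1:5]
points(t1[worst], seed_rain_cohort[worst], pch = 19, col = "red")
```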

Next, we need to work out what the fitness contribution of each cohort is.

Then consider the light environment over time. This reconstructs the spline for the light environment in both runs, computes canopy openness in each, and takes the difference between the two. The resulting image plot is blue in regions where the refined light environment is lighter (higher canopy openness) and red in regions where it is darker; the colours are from ColorBrewer's RdBu palette.
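
A sketch of how that image can be constructed, assuming `run_scm_collect()` output whose `light_env` element holds, for each recorded time, a matrix with `height` and `canopy_openness` columns, and whose `time` element holds the recorded times (these names, the assignment of the refined schedule, and the height range are assumptions):

```r
## Canopy openness on a common (height, time) grid for one run.
light_on_grid <- function(res, heights, times) {
  by_time <- sapply(res$light_env, function(m) {
    f <- splinefun(m[, "height"], m[, "canopy_openness"])
    openness <- f(heights)
    openness[heights > max(m[, "height"])] <- 1  # full light above the canopy
    openness
  })
  ## interpolate each height's trajectory onto the common time grid
  t(apply(by_time, 1, function(o) approx(res$time, o, xout = times)$y))
}

res1 <- run_scm_collect(p)              # base schedule
p2 <- p
p2$cohort_schedule_times[[1]] <- t2     # refined schedule (assumed field)
res2 <- run_scm_collect(p2)

heights <- seq(0, 20, length.out = 201)
times   <- seq(0, max(res1$time), length.out = 201)
d <- light_on_grid(res2, heights, times) - light_on_grid(res1, heights, times)

## Blue where the refined run is lighter, red where it is darker.
image(times, heights, t(d), las = 1,
      xlab = "Time (years)", ylab = "Height (m)",
      zlim = max(abs(d), na.rm = TRUE) * c(-1, 1),
      col = RColorBrewer::brewer.pal(11, "RdBu"))
```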

Because the differences mostly manifest in the leaf area, we monitor error in both the leaf area and the fitness of all cohorts (the black line indicates the cohort identified as problematic above):
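
A sketch of that monitoring, again assuming element names in the error object (here `err$lai` and `err$seed_rain` for the per-cohort leaf-area and fitness errors of species 1; both names are assumptions):

```r
## Per-cohort error estimates for leaf area and for fitness.
err_lai <- res$err$lai[[1]]        # assumed name
err_fit <- res$err$seed_rain[[1]]  # assumed name
par(mfrow = c(2, 1), mar = c(4, 4, 1, 1))
plot(err_lai, type = "l", las = 1, xlab = "Cohort", ylab = "Leaf area error")
abline(v = i)  # cohort identified as problematic above
plot(err_fit, type = "l", las = 1, xlab = "Cohort", ylab = "Fitness error")
abline(v = i)
```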

We then run through rounds of refining cohorts until the estimated error has fallen below an appropriate threshold:
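
In plant, this refinement loop is carried out by build_schedule(); a sketch of using it, where the name of the error threshold in the control object is an assumption:

```r
## Refine the cohort schedule until the estimated error falls below the
## threshold, then see how many cohorts were needed.
p$control$schedule_eps <- 1e-3   # assumed control field
p_refined <- build_schedule(p)
length(p_refined$cohort_schedule_times[[1]])
```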

In each round, the algorithm looks at the error in the leaf area calculations and in the fitness calculations, refines the worst cohorts, and repeats as necessary; a schematic sketch of one round follows.
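
Schematically, one round looks something like this (a simplified illustration of the idea, not plant's actual implementation):

```r
## One refinement round: estimate per-cohort errors, flag those exceeding
## the threshold, and introduce new cohorts either side of them.
refine_once <- function(times, estimate_errors, eps) {
  err <- estimate_errors(times)   # combined leaf-area and fitness errors
  bad <- which(err > eps)
  if (length(bad) == 0) {
    return(times)                 # converged; nothing left to split
  }
  extra <- c((times[bad] + times[pmax(bad - 1, 1)]) / 2,
             (times[bad] + times[pmin(bad + 1, length(times))]) / 2)
  sort(unique(c(times, extra)))
}
```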

The problem cohorts are still in about the same place, but are much less pronounced (and even less so when considering relative error).