Conversation
jacobcook1995
left a comment
Hi Sally, just had a couple of comments. I think they mainly stem from me not actually explaining the pools going into litter properly.
```python
total_deadwood_mass = np.sum(mortality * community.stem_allometry.stem_mass)
self.data["deadwood_production"][cell_id] = total_deadwood_mass * (
    1 - self.model_constants.percent_stem_mass_attributed_to_lignin
)
```
I think I explained this badly, sorry! `deadwood_production` should be the total mass of wood, and `deadwood_lignin` is the proportion of the wood in `deadwood_production` that is lignin.
Ahhh makes sense! In that case I think I'll just remove the references to deadwood_lignin and update that with the rest of #748
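For reference, the distinction described above might look something like this (a minimal sketch using made-up numbers; the variable names mirror the quoted snippet, and `lignin_fraction` is assumed to be a proportion in [0, 1]):

```python
import numpy as np

# Hypothetical inputs mirroring the quoted snippet
mortality = np.array([2.0, 5.0])  # stems dying per cohort
stem_mass = np.array([100.0, 40.0])  # mass per stem
lignin_fraction = 0.3  # assumed proportion of stem mass that is lignin

# deadwood_production is the TOTAL mass of dead wood...
deadwood_production = np.sum(mortality * stem_mass)

# ...and deadwood_lignin is the proportion of that mass that is lignin,
# not the non-lignin remainder as in the original snippet
deadwood_lignin = lignin_fraction

print(deadwood_production)  # 400.0
```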
Codecov Report
All modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff            @@
##           develop     #765   +/-   ##
==========================================
+ Coverage    94.59%   94.61%   +0.01%
==========================================
  Files           74       74
  Lines         5087     5104      +17
==========================================
+ Hits          4812     4829      +17
  Misses         275      275
==========================================
```
davidorme
left a comment
A minor complaint about terms and then I think you are right that there is an issue with small numbers. I think drawing mortality stochastically using a binomial distribution is the way forward.
```python
per_stem_annual_mortality_rate: float = 0.1
"""Basic annual mortality rate for plants."""
```
This is fussy - particularly in a temporary simple implementation - but a mortality rate is usually given as a number out of a given population size (5.7 / 1000, for example), whereas a `per_stem_annual_mortality_probability` would be the probability per stem, and so lies in [0, 1]. Probably better to stick with probability rather than rate.
```python
mortality = np.round(
    cohorts.n_individuals * self.per_update_interval_stem_mortality_rate
)
```
I see the problem with no mortality at small population sizes. I think we should do this statistically:

```python
mortality = np.random.binomial(
    cohorts.n_individuals, self.per_update_interval_stem_mortality_rate
)
```

The binomial distribution gives the number of "successes" (deaths) out of a number of trials (number of stems) for the given per-stem probability of success. This way we get stochastic mortality and every stem could die, even with low probabilities.
This is absolutely the right way to do background mortality, but the downside is that it makes testing more problematic because the outcomes are no longer deterministic. We could do something crude: check that the number of individuals after mortality is <= the previous state, or use populations large enough that the chance of equality is vanishingly small.
We can also seed the random number generator - the `distribute_monthly_rainfall` function in the hydrology model does this.
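A seeded version of the binomial draw suggested above might look like this (a sketch only: the cohort sizes, the per-interval probability, and the seed are made-up values, and it uses the NumPy `Generator` API rather than the legacy `np.random.binomial`):

```python
import numpy as np

# Seed the generator so test outcomes are reproducible
rng = np.random.default_rng(seed=42)

n_individuals = np.array([1000, 50, 3])  # hypothetical cohort sizes
p_mortality = 0.0019  # hypothetical per update interval probability

# Deaths per cohort: even a 3-stem cohort can lose a stem eventually
mortality = rng.binomial(n_individuals, p_mortality)

# The crude deterministic check described above always holds
assert np.all(mortality <= n_individuals)
```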
Co-authored-by: David Orme <davidorme@users.noreply.github.com>
davidorme
left a comment
This looks good to me apart from the slight issue of me not knowing any maths...
```python
# Calculate the per update interval stem mortality rate
self.per_update_interval_stem_mortality_probability = (
    model_constants.per_stem_annual_mortality_probability
    / self.model_timing.updates_per_year
)
```
Ah. Now... It turns out this is wrong and we need to treat it like compound interest. Which was obvious after this was gently pointed out to me at breakfast by someone who actually works with mortality data.
```python
import numpy as np
import matplotlib.pyplot as plt

annual_p_death = 0.1
n_steps = 12
per_step_p_death = 1 - (1 - annual_p_death) ** (1 / n_steps)

# Annual: a single draw at the annual probability
n_indiv_annual = np.full((10000,), fill_value=1e6, dtype="int")
n_indiv_annual -= np.random.binomial(n_indiv_annual, annual_p_death)

# Naive: divide the annual probability evenly across the steps
n_indiv_naive = np.full((10000,), fill_value=1e6, dtype="int")
for month in np.arange(n_steps):
    n_indiv_naive -= np.random.binomial(n_indiv_naive, annual_p_death / n_steps)

# Compound: per-step probability derived from the annual survival
n_indiv_compound = np.full((10000,), fill_value=1e6, dtype="int")
for month in np.arange(n_steps):
    n_indiv_compound -= np.random.binomial(n_indiv_compound, per_step_p_death)

plt.hist(n_indiv_compound, label="compound")
plt.hist(n_indiv_naive, label="naive")
plt.hist(n_indiv_annual, label="annual")
plt.legend()
plt.show()
```
Oh! Well that plot is quite convincing! Will make the change
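The relationship behind the fix can also be checked deterministically, without simulating any populations (using the same numbers as the snippet above):

```python
annual_p_death = 0.1
n_steps = 12

# Per-step probability such that compounding recovers the annual survival
per_step_p_death = 1 - (1 - annual_p_death) ** (1 / n_steps)

# Surviving all n_steps at the compound rate matches the annual survival
annual_survival = (1 - per_step_p_death) ** n_steps
assert abs(annual_survival - (1 - annual_p_death)) < 1e-9

# The naive division understates mortality: (1 - 0.1/12)**12 > 0.9
naive_survival = (1 - annual_p_death / n_steps) ** n_steps
assert naive_survival > annual_survival
```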
Description
Adding in basic mortality. One question I have is how to deal with the potential of no trees dying if the numbers are too small.
If your update interval is 1 week and you use 10% annual mortality, the mortality PER update interval is 0.19%, so if you don't have a very sizeable tree population, it's possible that it will just always round down and no trees will ever die.
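The rounding problem described above is easy to reproduce (a sketch with assumed values: 10% annual mortality split naively over 52 weekly updates, applied to a 100-stem cohort):

```python
import numpy as np

annual_mortality = 0.1
updates_per_year = 52
per_update_mortality = annual_mortality / updates_per_year  # ~0.0019

# With a small cohort, expected deaths per update round down to zero
n_individuals = 100
deaths = np.round(n_individuals * per_update_mortality)
print(deaths)  # 0.0: no tree ever dies, however long the simulation runs
```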
Should I try to do something fancier here, or is this simple version OK for now with the limitation acknowledged?
Fixes #747
Type of change
Key checklist

- pre-commit checks: `$ pre-commit run -a`
- `$ poetry run pytest`

Further checks