LLOD = can of worms.
There is a lot of statistical literature on handling values that are left-censored at a lower limit of quantitation, but little of it applies to the LLOD, which can be defined as the value below which measurement noise exceeds the signal. I would impute values below the LLOD as draws from a uniform distribution on [0, LLOD], which has an expected value of LLOD/2. So your imputation is probably not the issue. What I suspect is that a plot of the data has a hockey-stick appearance, with a long flat stretch out to the left where most of the values are either very small or imputed. If you fit a line out there, it will have a slope approaching zero with a relatively large residual SD, and when you plug that into the back-calculation for t1/2, you get a really large value.
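A minimal sketch of that imputation, assuming the data are just a NumPy array of concentrations (the function name and signature are mine, not from any PK package):

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_below_llod(conc, llod, rng=rng):
    """Replace values below LLOD with draws from Uniform(0, LLOD).

    The imputed values have expected value LLOD/2, so on average this
    agrees with the common 'substitute LLOD/2' convention while still
    injecting some variability.
    """
    conc = np.asarray(conc, dtype=float)
    out = conc.copy()
    below = out < llod
    out[below] = rng.uniform(0.0, llod, size=below.sum())
    return out
```

Values at or above the LLOD pass through untouched; only the censored points are replaced.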
But think about it: the concentrations in that region reflect non-single-compartment behavior, either slow release from a precursor compartment or some sort of flip-flop kinetics. What my colleagues in the pharmacokinetics group do is examine the time-course curve and use only the data from the descending portion prior to entering LLOD territory. They set values < LLOD to zero, and any values after the first zero are also set to zero. That more or less restricts the analysis to a single compartment with a single, time-independent elimination rate constant. I can see their point: I would rather estimate the rate constant from 4 descending points than from 104 points where the last 100 are effectively zero.
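That rule can be sketched in a few lines: a hypothetical helper (my naming, not my colleagues' actual code) that zeroes values below LLOD, carries zeros forward after the first zero, log-linear-fits the remaining descending points, and returns t1/2 = ln 2 / k:

```python
import numpy as np

def terminal_half_life(times, conc, llod):
    """Half-life from the descending limb, per the rule above:
    values < LLOD are set to zero, everything after the first zero
    is also zeroed, and only the positive points from the peak
    onward enter a log-linear fit."""
    t = np.asarray(times, dtype=float)
    c = np.asarray(conc, dtype=float).copy()
    c[c < llod] = 0.0
    zeros = np.flatnonzero(c == 0.0)
    if zeros.size:
        c[zeros[0]:] = 0.0          # carry zeros forward after the first zero
    peak = int(np.argmax(c))
    keep = (np.arange(len(c)) >= peak) & (c > 0.0)
    if keep.sum() < 2:
        raise ValueError("need at least 2 positive descending points")
    slope, _ = np.polyfit(t[keep], np.log(c[keep]), 1)
    k = -slope                      # elimination rate constant
    return np.log(2.0) / k          # t1/2 = ln 2 / k
```

On clean monoexponential data (C(t) = C0 * exp(-k*t)) this recovers ln 2 / k exactly, which is the single-compartment, time-independent situation the restriction is designed to enforce.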
Now, if your time-concentration curve isn't nicely behaved, e.g., it has two (or more) peaks, long flat periods, or concentrations that increase well after the initial dose, then noncompartmental analysis may not give you a believable half-life.
SteveDenham