Thank you for the clarifications.
I see several issues here.
One issue is inflated Type I error. Notably, "Adding sample size to an already completed experiment in order to increase power will increase the Type I error rate (alpha) unless extraordinary measures are taken..." http://depts.washington.edu/oawhome/wordpress/wp-content/uploads/2013/10/Improved_Stopping_Rules.pdf (p. 3).
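A quick simulation makes the inflation concrete. This is a hypothetical sketch (the function name and the specific stage sizes are my own choices, not from the quoted source): under a true null, test once, and if the result is not significant, add more observations and test again. Counting a rejection at either look drives the realized Type I error rate above the nominal alpha.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def two_stage_rejection_rate(n1=20, n2=20, alpha=0.05, reps=20000):
    """Simulate 'test, then add data and retest' under the null.

    Data are standard normal (true mean 0). We run a one-sample
    t-test after n1 observations; if it is not significant, we add
    n2 more observations and test the combined sample. A trial
    counts as a rejection if either test rejects.
    """
    rejections = 0
    for _ in range(reps):
        x1 = rng.standard_normal(n1)
        if stats.ttest_1samp(x1, 0).pvalue < alpha:
            rejections += 1
            continue
        x2 = np.concatenate([x1, rng.standard_normal(n2)])
        if stats.ttest_1samp(x2, 0).pvalue < alpha:
            rejections += 1
    return rejections / reps

# The realized rejection rate exceeds the nominal alpha of 0.05,
# even though the null is true in every simulated trial.
rate = two_stage_rejection_rate(reps=5000)
```

The second look is only taken when the first test "fails," so the extra rejections are pure additions on top of the nominal 5%, which is exactly the inflation the quoted passage warns about.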
Another issue is that you appear to have only one realization of the intervention. Although n = 1 provides some information, it is merely anecdotal; to demonstrate the impact of the intervention, you would need independent replications.
Another issue is that observations taken through time may not be independent; that is, there may be autocorrelation. Lack of independence reduces the effective degrees of freedom: each observation does not carry a full complement of information. This affects your assessment of sample size whenever "sample size" is taken to be the raw number of observations.
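To give a concrete sense of the cost, here is a minimal sketch (assuming, for illustration, an AR(1) process with lag-1 autocorrelation rho) of the standard first-order approximation to the effective sample size:

```python
def effective_sample_size(n, rho):
    """First-order effective sample size for n observations from an
    AR(1) process with lag-1 autocorrelation rho:
        n_eff ~= n * (1 - rho) / (1 + rho)
    For rho = 0 (independence) this returns n; positive rho shrinks it.
    """
    return n * (1 - rho) / (1 + rho)

# e.g. 100 observations with rho = 0.5 carry roughly the
# information of only about 33 independent observations.
n_eff = effective_sample_size(100, 0.5)
```

So a time series that looks amply long by raw count can be badly underpowered once the autocorrelation is accounted for.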
Another issue is using statistics estimated from the dataset to compute the sample size required for that same dataset. This circularity is addressed in the retrospective (post hoc) power analysis literature.
So, in summary and in my opinion: although you could retrospectively estimate the number of post-intervention observations needed to detect a particular effect, the value of that exercise is minimal, and it does not meaningfully advance your effort to assess the impact of the intervention.