More than likely, your program is taking the log (or lgamma) of 0, or creating an infinity. For instance, with alpha=0 as the starting value, m=1/alpha is infinite, and the log and lgamma terms that involve alpha blow up as well. I took your data and added a few dummy data points, and the program ran when I used 1 as the starting value for alpha. But this is still dangerous, because parameter estimates can migrate into 'impossible' regions during the optimization. You should look into putting bounds on the parameters; check the syntax of the BOUNDS statement.
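The failure modes above are easy to see in miniature. This is a hedged Python sketch (not your NLMIXED code); the labels just name the kinds of terms in an ll= expression that fail when alpha starts at 0:

```python
import math

# Each of these is the kind of term that derails the optimizer when
# alpha = 0: division by alpha, log of 0, and lgamma at its pole.
failures = []
for label, bad in [("m = 1/alpha",    lambda: 1 / 0.0),
                   ("log(alpha)",     lambda: math.log(0.0)),
                   ("lgamma(alpha)",  lambda: math.lgamma(0.0))]:
    try:
        bad()
    except (ZeroDivisionError, ValueError):
        failures.append(label)
# all three fail, so the very first likelihood evaluation cannot finish
```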
(I did not really look at the rest of the code to see if the customized ll is correct).
The hurdle log-likelihood function does appear sound. As lvm has indicated, computing 1/alpha for alpha initialized to 0 is a problem.
Personally, I prefer not to use bounds statements to control acceptable values for the parameters. Derivatives are usually not defined when parameter estimates are fixed at boundary values. Sometimes, the function itself is not defined when a parameter takes on the boundary value. (Such is the case for the NB distribution when you specify alpha=0. Even though the NB is a Poisson when alpha=0, it is not possible to compute the Poisson density using a negative binomial density function.)
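A quick numeric check of that last point, as a sketch with a hypothetical NB2 log-likelihood (names y, mu, alpha are stand-ins, not the original code): the NB log-likelihood approaches the Poisson one as alpha shrinks toward 0, but alpha = 0 itself cannot be plugged into the NB form because of the 1/alpha term.

```python
import math

def nb_ll(y, mu, alpha):
    # One observation's NB2 log-likelihood with dispersion alpha.
    m = 1.0 / alpha  # undefined at alpha = 0
    return (math.lgamma(y + m) - math.lgamma(m) - math.lgamma(y + 1)
            + m * math.log(m / (m + mu)) + y * math.log(mu / (m + mu)))

def poisson_ll(y, mu):
    # The Poisson log-likelihood the NB converges to as alpha -> 0+.
    return y * math.log(mu) - mu - math.lgamma(y + 1)

# Tiny positive alpha: the two log-likelihoods nearly coincide...
gap = abs(nb_ll(3, 2.5, 1e-6) - poisson_ll(3, 2.5))
# ...but nb_ll(3, 2.5, 0.0) itself raises, as described above.
```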
Instead of using boundary constraints, consider parameterizing the model so that the parameter estimates are always in an acceptable range. For instance, in the NB distribution, the parameter alpha must always be positive. If you name log_alpha as a parameter and then compute alpha=exp(log_alpha), then alpha>0 for -infinity < log_alpha < infinity.
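In NLMIXED terms, that would mean declaring log_alpha on the PARMS statement and computing alpha=exp(log_alpha) before the likelihood (log_alpha being a name you choose). The Python below just verifies the range claim behind the trick:

```python
import math

# Reparameterization sketch: the optimizer moves log_alpha anywhere on
# the real line, but alpha = exp(log_alpha) stays strictly positive,
# so no bounds statement is needed.
def alpha_of(log_alpha):
    return math.exp(log_alpha)

small = alpha_of(-30.0)  # very close to the Poisson limit, but still > 0
one   = alpha_of(0.0)    # log_alpha = 0 corresponds to alpha = 1
```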
Dale makes a good point: it is best to avoid parameter bounds when possible. For your model, Dale's reparameterization should let you avoid explicit bounds. With some nonlinear models it can be challenging to get the optimization to work without bounds, but avoiding them is always a good goal.
So the standard errors for the hurdle part of the model are huge! I can play around with the starting value of log_alpha and get them somewhat better, but they are still huge. I also tried changing the hurdle model. Right now it is log(exp(zeromodel) / (1 + exp(zeromodel))). Based on the hurdle Poisson in SAS for Mixed Models by Littell et al., I changed it to -exp(zeromodel), but that did not change the outcome.
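For what it's worth, those two zero-part expressions are genuinely different link functions, not rearrangements of each other; a small numeric sketch, assuming zeromodel is the zero part's linear predictor (function names here are hypothetical):

```python
import math

def log_p_zero_logistic(eta):
    # log(exp(eta) / (1 + exp(eta))): logistic (binary hurdle) form,
    # written stably as eta - log(1 + exp(eta)).
    return eta - math.log1p(math.exp(eta))

def log_p_zero_poisson(eta):
    # -exp(eta) = log P(Y = 0) for a Poisson with log link
    # (the Littell et al. hurdle-Poisson form).
    return -math.exp(eta)

# At eta = 0 the implied zero probabilities already differ:
p_logistic = math.exp(log_p_zero_logistic(0.0))  # 1/2
p_poisson  = math.exp(log_p_zero_poisson(0.0))   # exp(-1)
```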