When you look at a goodness-of-fit statistic, what you are measuring is the evidence against the hypothesis that the data follow the specified distribution. Note 1 on Charles Annis's Anderson-Darling page does a good job of explaining what you are really looking at. You are not demonstrating that you have a normal distribution; at best you are failing to find evidence that you do not. For your data, that one calculation fails to reject the hypothesis of normality - a much weaker statement than concluding the data are normal.
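To make that distinction concrete, here is a minimal sketch in Python using SciPy's Shapiro-Wilk normality test (a stand-in for illustration; the text above discusses Anderson-Darling, and the sample data are simulated). The null hypothesis is that the data are normal, so a large p-value only means we fail to reject normality, not that normality is proven.

```python
import numpy as np
from scipy import stats

# Simulated sample -- in practice this would be your measured data.
rng = np.random.default_rng(42)
data = rng.normal(loc=10.0, scale=2.0, size=200)

# Null hypothesis: the data come from a normal distribution.
stat, p_value = stats.shapiro(data)

if p_value > 0.05:
    # Large p-value: no evidence against normality -- NOT proof of it.
    print(f"p = {p_value:.3f}: fail to reject normality")
else:
    print(f"p = {p_value:.3f}: reject normality at the 5% level")
```

The wording in the print statements is the whole point: the test either rejects normality or fails to reject it; it never confirms it.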
The process I use is the "Distribution Analyzer", which calculates goodness of fit for a series of common distributions and reports the distribution with the highest p-value. (I believe this is similar to the Pearson analysis.) The logic is that the curve with the highest p-value is the one with the least evidence against it as the true distribution. So it considers several distributions, rather than prejudging just one.
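The "fit several distributions, keep the highest p-value" idea can be sketched roughly as below. This is my own illustration, not the Distribution Analyzer's actual method: the candidate list, the maximum-likelihood fitting, and the use of the Kolmogorov-Smirnov test are all assumptions, and p-values from a KS test run against parameters estimated from the same data are known to be biased high.

```python
import numpy as np
from scipy import stats

# Simulated, mildly skewed sample standing in for real data.
rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.5, sigma=0.4, size=300)

# Hypothetical candidate set -- a real tool would use its own list.
candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(data)  # maximum-likelihood parameter estimates
    _, p = stats.kstest(data, dist.cdf, args=params)
    results[name] = p

# Rank candidates by p-value; the top one has the least evidence against it.
for name, p in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} p = {p:.3f}")
best = max(results, key=results.get)
print(f"best fit by p-value: {best}")
```

Even here, the "best" distribution is only the one the data argue against least, which matches the hedged interpretation above.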
That is why I prefer it. As you can see, the tool also lets me plot the data and fitted curves for any of the distributions, which is handy. If the best curve's p-value is very close to the normal curve's p-value, I may decide the data are "normal enough" for my statistical purposes.