Question (1) isn't a question; it's a statement. Unless you meant for it to have a question mark on the end. If so, the reason p-values are halved is that a two-tailed test looks for values far from the mean in either direction - i.e., what are the chances you will get a value X standard deviations above or below the mean? That probability is contained in the two tails of the curve, so we split the TOTAL p-value in half, one half per tail.
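To make the halving concrete, here is a minimal Python sketch using only the standard library (the z-value of 2 is just an arbitrary illustration, not a number from the question):

```python
from statistics import NormalDist  # standard library, Python 3.8+

z = 2.0  # an arbitrary test statistic, 2 standard deviations above the mean

# One-tailed p-value: the area in a single tail beyond z
one_tailed = 1 - NormalDist().cdf(z)

# Two-tailed p-value: the area in BOTH tails (above +z and below -z).
# By symmetry of the normal curve this is exactly twice the one-tailed
# value, which is why halving a two-tailed p-value gives the one-tailed one.
two_tailed = 2 * one_tailed

print(round(one_tailed, 4))  # ~0.0228
print(round(two_tailed, 4))  # ~0.0455
```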

Question (2) is going to get a long answer. If a result is "statistically significant," it means the result you observed is unlikely to have occurred by chance. Here is a little experiment:

Say the mean volume of some drink is reported to be 16 ounces, with a standard deviation of 0.5 ounces. You buy a case of soda and find that it has a mean volume of 14.25 ounces. You exclaim, "I'm being shorted!" You conduct a hypothesis test that the mean number of ounces in the soda is less than 16, at a 0.05 level of significance, and find that your test statistic lies in the rejection region, with a p-value of, let's say, 0.02 (I'm making this number up). What this means is that IF the mean volume of a soda can WERE 16 ounces (or, in general, if the null hypothesis WERE true), you would have a 2% "chance" of getting a case of soda with a mean volume of 14.25 ounces or less. Is it impossible? Of course not. Is it unlikely? Probably. Therefore, at a 0.05 level of significance, you would reject the notion that the mean volume of a soda is 16 ounces.
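Since the 0.02 above was made up, here is what the actual left-tail probability works out to, under one simplifying assumption of mine (the original setup never states the case size, so this treats 14.25 as a single observation rather than a case average):

```python
from statistics import NormalDist  # standard library, Python 3.8+

# Null hypothesis: cans are filled with mean 16 oz, sd 0.5 oz.
# Simplifying assumption (mine, not the text's): 14.25 oz is treated
# as one observation, since the number of cans per case isn't given.
null_dist = NormalDist(mu=16, sigma=0.5)

observed = 14.25
p_value = null_dist.cdf(observed)  # left-tail area: P(X <= 14.25 | H0 true)

alpha = 0.05
print(p_value)          # ~0.00023, far smaller than the made-up 0.02
print(p_value < alpha)  # True: reject the null at the 0.05 level
```

With a whole case of cans averaged, the standard error would shrink and the p-value would be smaller still, so the conclusion (reject) wouldn't change.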

Some people do not like reporting the level of significance and instead rely on the p-value. The p-value tells you oh so much more than a sig-level: the p-value is LITERALLY the probability of getting the value you observed (14.25), or something more extreme than that. It is usually worded as "the p-value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true," which always confused me until I took the time to sit down and really read my textbook. I like the p-value because it is what you have been dealing with up to this point: area under a curve. All of a sudden sig-levels get tossed at you; I suppose for the general public p-values are harder to understand than significance levels, but a p-value has more intuitive information in it.
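That textbook wording becomes much clearer if you simulate it: build the world where the null hypothesis IS true, and count how often a result at least as extreme as yours shows up. A minimal sketch, reusing the illustrative soda numbers from above (and my same single-observation assumption):

```python
import random
from statistics import NormalDist

random.seed(42)  # fix the seed so the run is reproducible

# The world where the null hypothesis is true: volumes follow N(16, 0.5).
mu, sigma, observed = 16.0, 0.5, 14.25
trials = 200_000

# Count how often that null world produces a value at least as extreme
# (here: at least as SMALL) as the one actually observed.
hits = sum(1 for _ in range(trials) if random.gauss(mu, sigma) <= observed)
simulated_p = hits / trials

# The exact p-value is the same idea done analytically: area under the curve.
exact_p = NormalDist(mu, sigma).cdf(observed)

print(simulated_p)  # a small fraction, close to the exact tail area
print(exact_p)      # ~0.00023
```

The simulated fraction and the exact tail area land on (roughly) the same small number, which is the whole point: a p-value is nothing more exotic than the area you've been computing all along.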