gambling8nt's comments

Half a century--two human generations--is not nearly enough time for evolutionary pressures to have the sort of effect you are talking about. Least of all in a society of abundance in which the majority of members reproduce.

Instead, I'd suggest you look at the various changing environmental factors for an explanation of these phenomena: BPA in plastics (http://en.wikipedia.org/wiki/Bisphenol_A), the growing use of soy in human diets (with its attendant phyto-estrogens), and the growing quantities of synthetic human estrogen in the environment (already known to have effects on fish, see for instance http://www.seattlepi.com/local/124939_estrogen04.html).


Sure, phyto- and xeno-estrogens are strong suspects in this case. But here's another data point:

Measure the testosterone level in your bloodstream.

Then for a few weeks start doing heavy squats (weight lifting) every other day, go car racing, skydiving, etc.

Now measure T again. See the difference?

There are all sorts of things like that. The bottom line is, the more secure the environment, the less the need for men to be "men".

Also, I was not implying that the new evolutionary pressures have already made changes; I was just saying the changes are being made now. But how long before they will become visible, I have no idea. Probably not tomorrow.


I agree that lack of physical activity is another likely factor for changing hormone levels and their secondary effects.

However, I know of no reason to think that these changes are heritable, or that naturally low T is actually a reproductive advantage in our society. (Indeed, given that we have inverted the more typical historical trend of the wealthy out-reproducing the less wealthy, and the selection of low-T for wealth here asserted, one might expect that this society actually reflects reproductive pressure against low-T, rather than in favor of it.)


The launch vehicles themselves may not be that fragile, but payloads often are. Multimillion-dollar satellites can be damaged or rendered completely non-functional by something as small as a 1-meter drop (see http://www.spacetoday.net/Summary/2230 , for example).


Dropping 1m onto a hard floor is a lot worse than being slowly hoisted from horizontal to vertical.

If you disagree, I'll ride on the hoist and you can fall 1m headfirst onto concrete.
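
To put rough numbers on the difference (a back-of-the-envelope sketch; the 5 mm stopping distance is an assumed figure for a stiff structure hitting concrete):

    v = \sqrt{2gh} = \sqrt{2 \cdot 9.81 \cdot 1} \approx 4.4\ \mathrm{m/s}

    a = \frac{v^2}{2d} = \frac{gh}{d} = \frac{9.81 \cdot 1}{0.005} \approx 2000\ \mathrm{m/s^2} \approx 200g

versus roughly 1 g of quasi-static load during a slow tip-up.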


If the bolts holding the payload in place shear away under the lateral force of supporting the payload, the payload may fall and hit the hard inside of the launch vehicle, making the two events quite similar. (Indeed, the original drop I linked to occurred when the satellite was not correctly bolted to a table on which it was going to be tipped to one side in order to verify that it could withstand the stress of being in that physical orientation.)

That said, ugh's comment suggests my reasoning (as an explanation for vertical assembly) is either incorrect, incomplete, or both.


If your erection system fails, it doesn't matter how it works; "bad things" are a likely outcome.


that's what she said


Ariane rockets are also assembled vertically [1]. Since assembly and adding the payload are two discrete steps, done in two different buildings [2], both times vertically, that tells me there is more to it than the payload.

[1] http://www.arianespace.com/spaceport-ariane5/launcher-integr...

[2] http://www.arianespace.com/spaceport-ariane5/final-assembly-...


The main reason is merely tradition (really). Most systems are designed as evolutions of older systems; if the first system used vertical assembly, then so will the latest.

Note that the Russians have been doing horizontal to vertical launch vehicle assembly for decades (with manned and unmanned launches). There's no engineering reason why you must prefer one method or the other, it's a choice.

http://www.globalsecurity.org/space/world/russia/images/usta...


This does indeed suggest that there may be other factors at work. I could try tossing out other ideas, but they would be little more than hypotheses. (Bending during the tipping process damaging the joints in multi-segment launch vehicles? This might explain the difference with SpaceX's Falcons, which I believe do not have joints with O-rings.)


As noted in the "how it works" link (posted twice on this page, or at the bottom of your chart), they've assumed you have the average emission levels for all sorts of other activities, such as home telephone service, car purchases, apparel purchases, and, of course, alcohol and tobacco.


Got to love the US Consumer Expenditure Survey...


About 30% of first marriages and 40% of second marriages fail within the first ten years (and, as the parent noted, over 50% of first marriages fail--see table 41 and appendix table II of http://cdc.gov/nchs/data/series/sr_23/sr23_022.pdf ). About 20% of each fail within the first five years.

Ten years is not enough time for children to be conceived and raised to maturity in a two-parent home (a situation that significantly improves a child's probable education and life outcomes). Ten-year term contracts are too short to secure the primary purpose of marriage--forming families to raise children--and too long to make a large dent in the divorce rate.


To elaborate on cperciva's (correct) point:

On a projective plane, a parabola contains a single point at infinity, connected (both figuratively and topologically) to its two open ends. A hyperbola contains two distinct points at infinity, each connected to one end of each of the two usual branches.
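
To make that concrete, a standard computation in homogeneous coordinates $[X:Y:Z]$, with the line at infinity given by $Z = 0$:

    y = x^2 \;\longrightarrow\; YZ = X^2; \quad Z = 0 \Rightarrow X^2 = 0 \Rightarrow [0:1:0] \text{ (one point, with multiplicity two)}

    xy = 1 \;\longrightarrow\; XY = Z^2; \quad Z = 0 \Rightarrow XY = 0 \Rightarrow [1:0:0] \text{ and } [0:1:0]

The doubled intersection for the parabola reflects that the curve is tangent to the line at infinity there, which is why both open ends close up at the same point.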


It's not like the IPCC was trying to hide this. If anything, the entire way this scenario played out should be a reassurance that the IPCC and cooperating scientists are acting in good faith. They're not trying to "cover up" failures; they're following the established scientific process.

Actually, the linked research came from a survey commissioned by the WMO, not the IPCC. The IPCC fabricated the original claims that there would be more, and more severe, hurricanes due to global warming; according to the original article, there was "no science so [sic] substantiate them."

One of the authors of this research, who resigned from the IPCC in protest over the original hurricane claim, indicated in 2005: "All previous and current research in the area of hurricane variability has shown no reliable, long-term trend up in the frequency or intensity of tropical cyclones." It has taken the past several years to compile enough data to change this stance, and the results contradict the IPCC's original claims.

There are, undoubtedly, scientists who act largely in good faith based upon the existing climate data; the limited amount of data that I have seen indicates that there are aspects of global climate change that are quite real. The IPCC, however, is a political organization that appears transparently to be acting (or, at least, to have acted) in bad faith--between its baseless claims regarding hurricanes, its inaccurate estimates and later denial of the rate of melt of glaciers in the Himalayas, and various dubious data practices (failure of some members to comply with freedom of information requests, "lost" data and storing only reduced results of data, and selectively ignoring the tree ring data, to name just a few examples).

This is not the story of a well-founded claim being corrected by advances in data or theory. This is the result of a political claim, based on little or no evidence, being examined by an outside body and found inaccurate.


The original article indicates fewer, but slightly stronger.


As a fellow believer in not bothering to go to class, my feeling on the matter was always that I would go to class if and only if it was at least one of (1) more efficient than reading the textbook, or (2) substantially different from reading the textbook. These were both rare properties.


It looks like there are too many "unspecified" articles to learn much from this visualization, other than a moderate decrease in the number of articles in your "startup" category, supplanted largely by "ask" topics--a trend that leveled off early on the x-axis. (The graph would be dramatically more useful if that axis had some real-time benchmarks, such as date markers, to give a sense of scale.)


See the text in the article about the 'unspecified'.

As for the scale, it doesn't get much more precise than this. The only concession to legibility is stretching the graph horizontally; otherwise it would be only 138 pixels wide. Vertically, it is very close to one posting per pixel.

As the volume of postings on news.ycombinator increases due to increased traffic to the site, the graph will stretch further to the right.

This could be counteracted by changing the algorithm to 'bin' posts--for instance, one month per bin, with later bins simply holding more posts--but in practice the outcome would be the same; you'd just have another weighting step to make the Y-axes of the bins line up.
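
A minimal sketch of what that monthly binning might look like (the (timestamp, category) input format here is a hypothetical, not the actual data behind the graph):

    from collections import Counter
    from datetime import datetime

    def bin_by_month(postings):
        # postings: iterable of (unix_timestamp, category) pairs
        bins = Counter()
        for ts, category in postings:
            month = datetime.fromtimestamp(ts).strftime("%Y-%m")
            bins[(month, category)] += 1
        return bins

    # Example: counts per (month, category); each month's totals would
    # still need the weighting mentioned above so the Y-axes line up.
    print(bin_by_month([(1262304000, "ask"), (1264982400, "startup")]))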


For the unspecified articles, one possibility is to run the URLs through bit.ly to get the meta description/keywords.

Example: (Is Amazon EC2 oversubscribed)

http://api.bit.ly/info?version=2.0.1&hash=83VrYk&log...
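
A rough sketch of that lookup (the login/apiKey parameters and the response layout are assumptions about bit.ly's v2 API, and the credentials are placeholders):

    import json
    import urllib.request

    # Placeholder credentials; the truncated link above appears to pass
    # similar parameters to the v2 info endpoint.
    url = ("http://api.bit.ly/info?version=2.0.1&hash=83VrYk"
           "&login=YOUR_LOGIN&apiKey=YOUR_API_KEY")

    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    # Dump the whole response; any meta description/keyword fields for
    # the hash should appear in the per-hash results.
    print(json.dumps(data, indent=2))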


What Zed is saying when he notes that meta-statistics are normal is that, thanks to the central limit theorem, the average and standard deviation of data sets collected from the same underlying probability distribution (one with finite mean and variance) will tend to be normally distributed in the limit of infinite sample size, even if the underlying system behavior is far from a normal distribution. In practice you work with finite sample sizes, so an underlying distribution sufficiently far from normal will result in a non-normal distribution of meta-statistics--but in most applications, this sort of pathological distribution is largely irrelevant.

Take our example of looking at response time for loading a web page. There is some finite point (say, 10 sec) beyond which we no longer care how much longer it takes. So instead of considering the distribution of response times t, we consider the distribution of min(t, 10 sec). This distribution only has support over a finite interval, so its meta-statistics normalize rapidly as you increase the number of trials.

Using this will under-report the actual standard deviation of the response time (which might, as you say, not even converge), since we've eliminated extremely low-probability events with very high response times. But as a practical matter this is largely irrelevant--if these events occur with high enough probability for us to care, we'll notice them anyway. The point of this exercise is not to perfectly ascertain the underlying distribution of t; it is to develop useful predictions for system behavior in practice.
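
A quick way to see this is to simulate it (a sketch under the stated assumptions: Pareto-distributed response times with alpha = 1.5, which has infinite variance, capped at 10 seconds):

    import random
    import statistics

    random.seed(0)

    def response_time():
        # Heavy-tailed raw response time; support is [1, infinity)
        return random.paretovariate(1.5)

    def capped_sample_mean(n):
        # Mean of min(t, 10 s) over n simulated requests
        return statistics.fmean(min(response_time(), 10.0) for _ in range(n))

    # The meta-statistic: sample means over many independent trials.
    # Per the CLT, a histogram of these should look roughly normal.
    means = [capped_sample_mean(1000) for _ in range(2000)]
    print(statistics.fmean(means), statistics.stdev(means))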


The calculated standard deviation from any finite sample of a long-tailed distribution (e.g. Pareto with alpha <= 2) will be off by a factor of infinity. The point is that not only is the standard deviation irrecoverable in this case, it's hardly the figure of merit even if you do know it.
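
For reference, with the Pareto density $f(x) = \alpha x^{-\alpha-1}$ on $[1, \infty)$:

    E[X^2] = \int_1^\infty x^2 \, \alpha x^{-\alpha-1} \, dx = \alpha \int_1^\infty x^{1-\alpha} \, dx,

which diverges for $\alpha \le 2$, so the variance (and hence the standard deviation) is infinite.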


Except that in real life there are no distributions with support outside of a finite interval in space or time; there's always some point when you stop running the system, and if some packets don't arrive by that point, you generally don't care how much longer they would have taken.


Except that the sampling distribution of the standard deviation is a scaled chi-square, not a normal. The central limit theorem applies only to the mean, not to any statistic you might dream up. It's trivial to think of many that would not converge even with a winsorised response time.
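
The standard result being invoked here (it holds exactly for i.i.d. normal data):

    \frac{(n-1)s^2}{\sigma^2} \sim \chi^2_{n-1}, \qquad s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2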


Chi-squared distributions are well approximated by normal distributions close to the mean.

The point is not that arbitrary statistics will always be perfectly behaved (or even well behaved) on sampled data--it's that, under certain practical conditions, these statistics are well-behaved enough to make reasonably accurate predictions of system behavior, and an inexperienced statistician (as most people are) is less likely to make a gross error.
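
Concretely: $\chi^2_k$ is a sum of $k$ i.i.d. $\chi^2_1$ variables, each with mean 1 and variance 2, so the central limit theorem itself gives

    \chi^2_k \approx \mathcal{N}(k, \, 2k) \quad \text{for large } k,

which is why the normal approximation works well near the mean.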


Practical conditions not including routing networks and the stock market, you may wish to add...


Real life situations have finite cutoffs in behavior that remove many pathological problems with certain statistical models.

