Hacker News | fourseventy's comments

It's your decision to take drugs that destroy your body's ability to compete. It's the same as people who decide to eat way too much and similarly destroy their body's ability to compete. We don't make new 'fat person' divisions for people who eat too much. If you want to compete in sports at a high level, taking female hormones is detrimental to that.

So your position is that men can freely play in women's sports?

Are you serious?

A female or male is not 'assigned' anything. Just because you are confused about biological reality doesn't mean it doesn't exist.

Just because you are confused about the distinction between gender and sex doesn't mean it doesn't exist.

Birth certificates record sex, not gender, no?

Theoretically, yes. But, sometimes, they do get it wrong.

A quick Google search will fill your screen with examples of men competing in women's sports and winning.

Only because they don't practice those events... If men practiced those events, they would be better at them.

I'm not sure that is true. Beam targets flexibility in ways that no men's event does.

Men are stronger, faster, have denser bones, bigger lungs, bigger hands, etc., etc. Men and women are different in hundreds of ways; it's not just 'lean body mass'. Men are better at sports than women. Do you even live in reality? Have you ever competed in anything in your life?

Then create those divisions. Please be rational.

For what conceivable reason would you want to recreate the male and female division using a dozen or more proxies for sex instead of just using sex, to wind up with people being placed into the same buckets they would have been if you just went by sex in the first place? This seems ideologically motivated.

The controversy in these comments answers that question nicely. It seems likely that such a change would obviate these edge cases, though they may introduce their own; that seems worthy of consideration.

Really, the question seems better turned around: why use a known bad proxy for physical ability when another one might be better?


Those divisions already exist. Most sports have different leagues: international leagues, national leagues, regional leagues, all the way down to hobby leagues or beer leagues. If we assigned everyone to a league independent of gender, the highest leagues (the most popular and most lucrative ones) would be exclusively men, and women would only be present in the lower leagues. Nobody wants this outcome.

And then you get a situation with as many divisions as there are people, and everyone gets a gold medal; everyone is a winner. The true woke paradise.

Fortunately, most people don't like to live in this hell and are against clear attempts to destroy women's sports by the clueless and/or purposefully malicious activists.


Good lord. Absolutely nobody is going to watch boxing divisions based on lung size and bone density.

Did you actually think that lean mass would be a sensible way to separate divisions in a gender neutral fashion? That would, again, just result in women being unable to compete professionally in virtually any sport. They would be relegated to Division N, for some very large value of N. Competing alongside multitudes of biologically male amateurs, where nobody cares and nobody pays to watch. To even entertain this idea betrays a total lack of understanding of the matter at hand.

Right now you are acting like Elon Musk storming into the government and having 20 year olds cut everybody's budget. You may think you're coming in with fresh outsider perspective and an open minded way to look at things and improve them, but everyone actually involved in the domain can see a trainwreck in progress. It's not a good look.

I am quite certain it's not your intention, but you're really coming across as someone who hates women's sports, and doesn't want them to exist. On behalf of my wife and sister and a lot of the women I've known in a lifetime of playing sports - kindly keep your awful ideas to yourself. Women fought tooth and nail for the right to have their own professional sporting opportunities. Don't you dare try to take it away from them.


Dude, men are better than women at virtually every single sport. What are you talking about.

That doesn't make any sense, because the revenue is already booked for the sale, which has nothing to do with when the delivery truck actually arrives.

If it is cash based accounting, revenue and expenses are booked when the money changes hands.

If it is accrual-based accounting, it takes place when the legal event triggering the change of ownership of the goods occurs, which depends on the shipping terms. That could be anywhere from when the goods are available for the buyer's transport agent to pick up at the seller's facility (EXW), to when they are delivered, unloaded, and at the buyer's door (DDP), or any of a variety of points in between (FOB Origin, FOB Destination, and a bunch of other shipping terms with their own rules on when ownership, and responsibility, transfer from seller to buyer).
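A toy sketch of the timing difference described above; the function names, symbols, and dates are invented for illustration, and only a few of the many incoterms are covered:

```ruby
# Cash basis: revenue is booked when the money changes hands.
def cash_basis_revenue_date(payment_date:)
  payment_date
end

# Accrual basis: revenue follows the transfer of ownership,
# which the shipping terms define.
def accrual_basis_revenue_date(shipping_term:, pickup_date:, delivery_date:)
  case shipping_term
  when :exw, :fob_origin      then pickup_date   # passes at the seller's dock
  when :ddp, :fob_destination then delivery_date # passes at the buyer's door
  else raise ArgumentError, "unhandled shipping term: #{shipping_term}"
  end
end
```

So the same physical shipment can land in different accounting periods depending on the basis used and the terms agreed.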


Yes, to add on -

- incoterms (https://www.dhl.com/content/dam/dhl/global/dhl-global-forwar...)

- cash flow statement v. income statement (accruals)


Those diagrams of risk vs. cost borne are helpful.

It's got to be one of these:

FOB Shipping Point (or Origin): Responsibility transfers to the buyer as soon as the goods leave the seller's premises. You book it when it leaves your loading dock.

FOB Destination: The seller retains risk and costs until the goods reach the buyer’s location.

The sale doesn't happen until the asset transfer occurs. Before that any cash you get from the sale is balanced by the liability to actually produce the good or refund the money. Or more likely you don't get any cash but can't record the bill as accounts receivable. It's not receivable until the transfer point is crossed.


You can account for a transaction that's been placed but not fulfilled. I think when someone orders $15m of goods, you can immediately book $15m of accounts receivable (asset) and $15m of goods owed (liability) as soon as you have the expectation it will happen. If the transaction falls through, you delete them.

Under GAAP you cannot recognize revenue before the service is delivered or product is shipped. You can accrue revenue that is earned but not yet paid (if you are paid on Net 30, for example), but even if pre-paid you have to book that as deferred revenue, which is a liability (until you ship).

There's no deleting anything in accounting.
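A minimal double-entry sketch of the GAAP treatment described above, assuming a prepaid sale; the ledger structure and account names are just illustrative:

```ruby
# Tiny ledger: debits are positive, credits negative, so every posting nets to zero.
ledger = Hash.new(0)

post = lambda do |debit:, credit:, amount:|
  ledger[debit]  += amount
  ledger[credit] -= amount
end

# Customer prepays $15m before shipment: cash comes in, but no revenue yet.
# The offset is deferred revenue, a liability.
post.call(debit: :cash, credit: :deferred_revenue, amount: 15_000_000)

# The goods ship and ownership transfers: the liability unwinds into revenue.
post.call(debit: :deferred_revenue, credit: :revenue, amount: 15_000_000)
```

Note that nothing was deleted along the way: the deferred-revenue balance is zeroed by an offsetting entry, not by erasing the original one.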

As someone with a partner who’s an accountant, I love seeing technologists be confidently wrong about accounting fundamentals vs. the type of technicalities that she has to deal with. Your comment highlights the absurdity of their confidence; kudos.

This is not correct. A business this big would definitely be using accrual accounting (not cash), which generally means you count the revenue when actual ownership transfers to the buyer. Since the truck was operated by the seller, the transfer of ownership is almost certainly counted as when the buyer receives the goods.

Rewrite the anecdote with the truck racing to the supplier to make the pickup on time.

Accounts receivable, revenue, and cash are related, but separate, accounting items.

Ya but op's anecdote is cute and funny.

That’s not why it doesn’t make sense. It doesn’t make sense because in the forward guidance you’d be able to say you expect the $15 million coming in.

Cash vs Accrual

How many F500 companies use cash accounting? How many public companies altogether?

If you make a change to the return type of a function, for example, you have to manually find all of the references to that function and fix the code to handle the change. Since there are no compile-time errors, it's hard to know that you got everything and haven't just introduced a bug.


Yes, and the downsides cascade. Because making any change is inherently risky, you're kind of forced not to make changes, and instead to pile more on top. So technical debt just grows, and the code becomes harder and harder to reason about. I have this same problem in PHP, although it's mostly solved in PHP 8. But touching legacy code is incredibly involved.


Especially with duck-typing, you might also assume that a function that previously returned true-false will work if it now returns a String or nil. Semantically they’re similar, but String conveys more information (did something, here’s details vs did(n’t) do something).

But if someone is actually relying on literal true/false instead of truthiness, you now have a bug.

I say this as a Ruby evangelist and apologist, who deeply loves the language and who’s used it professionally and still uses it for virtually all of my personal projects.
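A tiny Ruby sketch of that failure mode; the method names and return values are invented:

```ruby
# Version 1: returns literal true/false.
def save_v1(record)
  record[:valid] ? true : false
end

# Version 2: "helpfully" now returns the record id (a String) or nil.
def save_v2(record)
  record[:valid] ? "rec-42" : nil
end

# A caller relying only on truthiness survives the change...
def saved?(result)
  !!result
end

# ...but a caller comparing against the literal `true` silently breaks.
def strictly_saved?(result)
  result == true
end
```

`saved?` keeps working across the change, while `strictly_saved?` silently starts reporting every successful save as a failure, with no compile-time error to point at it.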


The best perspective I've seen is that static type enforcement is basically a unit test run at compile time.


Alan Kay's argument against static typing was that it was too limited and didn't capture the domain logic of the sorts of types you actually use at a higher level. So you leave it up to the objects to figure out how to handle messages. Ruby is, after all, a kind of spiritual descendant of Smalltalk.


The problem is that nobody listened to Alan Kay, and people write dynamic code the way they'd write static code, just without the types.

I always liked Rich Hickey's point, that you should program on the inside the way you program on the outside. Over the wire you don't rely on types and make sure the entire internet is in type check harmony, it's on you to verify what you get, and that was what Alan Kay thought objects should do.

That's why I always find these complaints a bit puzzling. Yes, in a dynamic language like Ruby, Python, Clojure, or Smalltalk you can't impose global meaning, but you're not supposed to. If you have to edit countless pieces of existing code just because some sender changed, that's an indication you've ignored the principle of letting the recipient interpret the message. It shouldn't matter what someone else puts in a map, only what you take out of it, the same way you don't care if the contents of the mail truck change as long as your package is in it.
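In that spirit, a receiver might look like the following minimal sketch, with invented field names; the handler validates only the one key it actually consumes:

```ruby
# The handler pulls out only the key it needs and checks it locally,
# ignoring everything else in the message.
def handle_shipment(message)
  weight = message[:weight_kg]
  raise ArgumentError, "missing or invalid weight" unless weight.is_a?(Numeric)
  weight > 30 ? :freight : :parcel
end
```

A sender can add, rename, or reorder any other field without this handler needing to change; only removing or corrupting `:weight_kg` affects it, and then it fails loudly at the point of use.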


That's a terrible solution because then you need a bunch of extra parsing and validation code in every recipient object. This becomes impractical once the code base grows to a certain size and ultimately defeats any possible benefit that might have initially been gained with dynamic typing.


>then you need a bunch of extra parsing and validation code in every recipient object.

That's not a big deal: when we exchange generic information across networks we parse information all the time, and in most use cases that's not an expensive operation. The gain is proper encapsulation, because the flip side of imposing meaning globally is that your entire codebase becomes one entangled ball, and as you scale a complex system, that tends to cost you more and more.

In the case of the OP, where a program "breaks" and has to be recompiled every time some signature change propagates through the entire system, that is a significant cost. Again, if you think of a large-scale computer network as an analog to a program, what costs more: parsing an input, or rebooting and editing the entire system every time we add a field somewhere to a data structure that most consumers of that data don't care about?

this is how we got micro-services, which are nothing else but ways to introduce late binding and dynamism into static environments.


> when we exchange generic information across networks we parse information all the time

The goal is to do this parsing exactly once, at the system boundary, and thereafter keep the already-parsed data in a box that has "This has already been parsed and we know it's correct" written on the outside, so that nothing internal needs to worry about that again. And the absolute best kind of box is a type, because it's pretty easy to enforce that the parser function is the only piece of code in the entire system that can create a value of that type, and as soon as you do this, that entire class of problems goes away.

This idea of using types whose instances can only be created by parser functions is known as "Parse, Don't Validate", and while it's possible and useful to apply the general idea in a dynamically typed language, you only get the "we know at compile time that this problem cannot exist" guarantee if you use types.
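A rough Ruby rendering of that idea; the class is hypothetical, and where a statically typed language would have the compiler enforce the guarantee, here it is approximated with a hidden constructor:

```ruby
# The only way to obtain an Email is through Email.parse, so any Email value
# floating around downstream is known to have passed the check.
class Email
  attr_reader :address

  def self.parse(raw)
    unless raw.is_a?(String) && raw.include?("@")
      raise ArgumentError, "not an email: #{raw.inspect}"
    end
    new(raw)
  end

  private_class_method :new   # nothing outside this class can mint an Email

  def initialize(address)
    @address = address
  end
end
```

Anything downstream that holds an `Email` can trust it was parsed, because `Email.parse` is the only way to construct one; calling `Email.new` directly raises.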


> The goal is to do this parsing exactly once, at the system boundary

You are only parsing once at the system boundary, but under the dynamic model every receiver is its own system boundary. Like the earlier comment pointed out, microservices emerged to provide a way to hack Kay's actor model onto languages that don't offer the dynamism natively. Yes, you are only parsing once in each service, but ultimately you are still parsing many times when you look at the entire program as a whole. "Parse, don't validate" doesn't really change anything.


> but under the dynamic model every receiver is its own system boundary

I'm not claiming that it can't be done that way, I'm claiming that it's better not to do it that way.

You could achieve security by hiring a separate guard to stand outside each room in your office building, but it's cheaper and just as secure to hire a single guard to stand outside the entrance to the building.

>microservices emerged to provide a way to hack Kay's actor model onto languages that don't offer the dynamism natively

I think microservices emerged for a different reason: to make more efficient use of hardware at scale. (A monolith that does everything is in every way easier to work with.) One downside of microservices is the much-increased system boundary size they imply -- this hole in the type system forces a lot more parsing and makes it harder to reason about the effects of local changes.


> I think microservices emerged for a different reason: to make more efficient use of hardware at scale.

Same thing, no? That is exactly what Kay was talking about. That was his vision: infinite nodes, all interconnected, sending messages to each other. That is why Smalltalk was designed the way it was. While the mainstream Smalltalk implementations got stuck in a single-image model, Kay and others did try working on projects to carry the vision forward. Erlang had some success with the same essential concept.

> I'm claiming that it's better not to do it that way.

Is it fundamentally better, or is it only better because the alternative was never fully realized? For something of modern relevance, take LLMs. In your model, you have to have the hardware to run the LLM on your local machine, which for a frontier model is quite the ask. Or you can write all kinds of crazy, convoluted code to pass the work off to another machine. In Kay's world, being able to access an LLM on another machine is a feature built right into the language. Code running on another machine is the same as code running on your own machine.

I'm reminded of what you said about "Parse, don't validate" types. Like you alluded to, you can write all kinds of tests to essentially validate the same properties as the type system, but when the language gives you a type system you get all that for free, which you saw as a benefit. But now it seems you are suggesting it is actually better for the compiler to do very little and that it is best to write your own code to deal with all the things you need.


> I think microservices emerged for a different reason: to make more efficient use of hardware at scale.

Scaling different areas of an application is one thing. Being able to use different technology choices for different areas is another, even at low scale. And being able to have teams own individual areas of an application via a reasonably hard boundary is a third.


Is that a common issue? I guess I'm having a hard time imagining a scenario that would (a) come up often and (b) be a pain to fix.

