Just a warning, parts of that page may be disturbing and/or distressing to some. The references to animal testing are particularly gruesome (to me anyway).
I feel that TypeScript, being only at 2.0, should be in a position to break things once in a while.
If you pay too much attention to keeping legacy code 'alive', you end up complicating matters with either compiler flags all over the place or redundant APIs, e.g. the Win32 API (old + new + newer versions of the same function).
The fact that the flag is global could actually slow down adoption of the new type system.
Let's say I want to use the new type system, but some library I depend on hasn't been updated. If I enable the new type system, I'll get both false positives and negatives when type checking.
If a library writer updated to the new type system, they'd break compatibility with callers that haven't updated. The safest option would be to maintain two almost-identical copies of the library -- one for the old type system and one for the new one.
A better option would be to allow specifying the type system mode at the top of each file (like "use strict"). That lets me use the new type system even if not all my libraries have updated yet, and it lets library writers adopt the new type system features without breaking existing users.
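A per-file opt-in might look something like the following. To be clear, the `"use new-type-system"` directive is entirely made up for illustration; TypeScript 2.x has no such mechanism, this is just a sketch of the proposal:

```typescript
// Hypothetical sketch of a per-file opt-in, analogous to JavaScript's
// "use strict". The directive string below is invented; it is NOT real
// TypeScript. The point is that only THIS file opts into the new rules.
"use new-type-system";

// A dependency compiled without the directive would keep the old checking
// semantics, so library authors could migrate file by file without
// breaking callers that haven't updated yet.
export function greet(name: string): string {
  return `Hello, ${name}!`;
}
```

Since the directive is just an expression-statement string (like `"use strict"`), older compilers that don't understand it would simply ignore it.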
If 12 megapixels can produce 10 to the power 86,696,638 images, and we came up with a way of enumerating those images, could we then build a function that, given any one of those images, returns the index of that image in reasonable time on current hardware? I.e., "you have just taken the 3999999987493th image"?
A friend of mine made a tool that did exactly this, but with Haikus instead. He had a dictionary of syllables, and then just iterated through the syllables (following the rules of a haiku). You can type in a haiku and find its index, or just iterate through the indices to see the (mostly nonsense) generated haikus.
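The core of a tool like that is just mixed-radix counting: each word slot is a "digit". A minimal sketch of the idea, with a made-up four-word dictionary and fixed word slots standing in for real 5-7-5 syllable counting:

```typescript
// Toy "haiku" enumerator. The dictionary and slot count are invented for
// illustration; a real tool would track syllables per line.
const words = ["sun", "rain", "leaf", "moon"];
const SLOTS = 5; // number of word choices per "haiku"

// Decode: treat the index as a base-4 number, one digit per slot.
function haikuAt(index: number): string[] {
  const out: string[] = [];
  for (let i = 0; i < SLOTS; i++) {
    out.push(words[index % words.length]);
    index = Math.floor(index / words.length);
  }
  return out;
}

// Encode: the inverse mapping, from a haiku back to its index.
function indexOf(haiku: string[]): number {
  let index = 0;
  for (let i = haiku.length - 1; i >= 0; i--) {
    index = index * words.length + words.indexOf(haiku[i]);
  }
  return index;
}
```

Iterating `haikuAt(0)`, `haikuAt(1)`, ... walks through every possible combination, most of which are nonsense, exactly as described above.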
Yes, but it wouldn't save any space. As a thought experiment, think of it this way:
How would we enumerate all these several gazillion image possibilities?
Well. Let's say image number one is all black: every pixel and every channel is zero. And let's say the last image in the enumeration is all white: 255 for each pixel and each channel.
Every conceivable image appears somewhere between these two ends. For example, image two is all black except that the last pixel's last channel has a value of 1 instead of 0.
Image 1840274917 has pixel 27581 slightly reddish.
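The scheme described above amounts to reading the raw pixel bytes as one enormous base-256 number (and with 12 MP x 3 channels x 8 bits = 288,000,000 bits, that's 2^288,000,000, roughly 10^86,696,638 distinct images, matching the figure upthread). A sketch using BigInt; the byte order here is an arbitrary choice:

```typescript
// Index of an image in the enumeration: its raw pixel bytes interpreted
// as a single big-endian base-256 integer. All black = 0, all white = max.
function imageIndex(pixels: Uint8Array): bigint {
  let index = 0n;
  for (const byte of pixels) {
    index = index * 256n + BigInt(byte);
  }
  return index;
}

// The inverse: reconstruct the pixel bytes of the image at a given index.
function imageAt(index: bigint, byteLength: number): Uint8Array {
  const pixels = new Uint8Array(byteLength);
  for (let i = byteLength - 1; i >= 0; i--) {
    pixels[i] = Number(index % 256n);
    index /= 256n;
  }
  return pixels;
}
```

So the "find the index in reasonable time" question answers itself: the index *is* the image data, read as a number.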
Hey, wait a minute, you've just created an image format for describing the data within the image! The only space you save (given this format) is on darker images, because they sit lower in the sequence and so have shorter indices.
But that's only because this specification demands that each image be exactly the same size and can make assumptions based on that. A lossless format like PNG performs much better over a wider range of images. (E.g. an all-white image has a huge index in our system, but is cheap in PNG.)
Not only could you enumerate all images, you could also enumerate all books. The Library of Babel does this, and you're looking at the particular page where this exact paragraph is written, for Hacker News by dackerman, no less.
My first thought too. Looking at what Microsoft is doing by offering compelling tools for cross-platform development, there's a possibility that Google is working on its own Xamarin-like solution. One can only dream of a single GUI API for developing apps for Android, iOS, Windows, WebAssembly, and Linux desktops using Swift. If Google shipped such a cross-platform GUI API, many developers would likely abandon the Cocoa API. Google probably sees the API, not the language, as the platform.
What was the author's methodology?