$250k from immediate family, if said fuzzy conditions don't confer any ownership or repayment? I'd say just barely yes (it's basically a gift to you at that point, which is then your money)
$250k from a third cousin in return for equity? No.
Being bootstrapped isn't an ungameable category, but it is a fairly unambiguous one IMO.
It's not that 1967 design is unsafe, nor that the MAX design is unsafe.
The problem is that 737 pilots were allowed to fly the MAX without recertification. A recert would have cost the airlines a lot, so Boeing pushed the idea that the MAX is a drop-in replacement for the vanilla 737.
The whole idea was to make a more efficient version of the 737 with no substantial changes in flight characteristics. As it turns out, the change in behaviour is substantial.
The foul play is Boeing pushing the regulatory agency in the US around, and the agency succumbing to it.
I think it is reasonable, until shown otherwise by a new type certification, to presume the MAX design (larger fans pushed forward) is unsafe. Flight characteristics are different, though currently covered up by software.
Kodak actually tried hard to bank on digital and made great digital sensors, but the world went too fast with digital. Their projections were off by a number of years. They thought film was going to gradually decline, not drop off a cliff.
Cython is much easier to adopt than people realize. A few import statements and a few type declarations, and you're good to go. I urge people to give it a try. The speedups can be dramatic.
It certainly is good, but I've never found a great dev setup. It seems like you have to force a full recompile of all cython in the project any time you make a change. At least that was the stated process for the statsmodels library. And it was always a little unclear whether I was running the latest code or hadn't yet actually compiled it.
I've never used cython before. Is it 1-to-1 with standard Python? If so, can you do development using the regular Python binary, and leave the compilation step as a pre-commit operation?
The workflow with Cython is that it produces native python modules (e.g. .so shared objects / .dll dynamic link libraries) that the python interpreter can import.
So if you change some of your Cython code, for it to be used at runtime you need to invoke the Cython build tools to rebuild the new version of your python native module.
I usually use Cython for a small core of compute heavy operations, and leave the rest of the project as pure python. That way I only need to rebuild Cython code if I change something inside that small core.
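A minimal build script for that kind of split might look like this (the file and project names here are hypothetical, just to illustrate the layout):

```python
# setup.py -- hypothetical minimal build script: only the small
# compute-heavy core (myproject/core.pyx) gets compiled; the rest
# of the package stays pure Python and needs no build step.
from setuptools import setup
from Cython.Build import cythonize

setup(
    name="myproject",
    ext_modules=cythonize("myproject/core.pyx"),
)
```

Running `python setup.py build_ext --inplace` then rebuilds core.pyx into a shared object next to the source, and `import myproject.core` picks up the new version.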
> Is it 1-to-1 with standard Python?
Not for the best speedups, no.
E.g. you might be able to get a modest speedup, say 50%, by taking a pure Python file, renaming it to *.pyx, and getting Cython to compile it. But that's not why I use Cython. I use it when I have compute-heavy code that I want to run at native speed (think matrix-vector product type stuff). By carefully rewriting that code in Cython, thinking hard about memory allocation, data structures (prefer C arrays!) and performance, it is fairly achievable to get a 500x speedup.
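As a rough sketch of what that careful rewriting looks like (illustrative code, not from any particular project; note this is Cython, not plain Python):

```cython
# dot.pyx -- the cdef declarations and the typed memoryview
# arguments give everything a C type, so the loop compiles to a
# tight native loop instead of interpreted bytecode.
def dot(double[::1] a, double[::1] b):
    cdef Py_ssize_t i, n = a.shape[0]
    cdef double acc = 0.0
    for i in range(n):
        acc += a[i] * b[i]
    return acc
```

The `double[::1]` syntax declares a contiguous typed memoryview, so indexing inside the loop is a plain C array access with no Python object overhead.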
Cython relies on you writing specialised Cython code that is quite close to C code -- strongly typed Cython variables work like statically typed C variables, not dynamically typed Python names. You end up with Cython code that cannot be executed as if it were normal Python code by a python interpreter.
But Python code usually cannot be executed very efficiently, whereas Cython can translate small loops of strongly-typed numeric code into small loops of strongly-typed C code, which often compile down to very small loops of native CPU instructions that run blazing fast.
Under the hood, Cython works by translating the not-quite-python code into C code that uses the python interpreter's C extension API. Then it compiles the C code into a python native module using a C compiler.
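Concretely, the pipeline looks roughly like this when invoked by hand (in practice the cythonize build tooling drives these steps for you, and the exact compiler flags vary by platform):

```shell
# 1. translate the not-quite-Python source into C code that
#    calls into CPython's C extension API
cython mymod.pyx                  # produces mymod.c

# 2. compile that C into a native extension module
cc -shared -fPIC $(python3-config --includes) mymod.c -o mymod.so

# 3. the interpreter can now `import mymod` like any other module
```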
Thanks for pointing this out. I didn't realize this until I read further into this thread and started looking at optimized cython projects. Now I understand what you're talking about.
It's not 1-1. Cython can compile regular Python, but you only really get significant speedups when you declare the types of things using cython-specific syntax.
Cython is used very widely in the Python ecosystem. It's rarely mentioned precisely because it's so pervasive.
But it's also not the same thing - AOT doesn't always mesh well with how dynamic Python is, but JIT can take care of all the more advanced scenarios with runtime-loaded or runtime-generated types and code.
I mainly programmed in Cython for a few years, and never came across standard Python code it wouldn't run. With unmodified Python code it's just usually only a few times faster, as opposed to 50x-200x faster or more with type annotations, the GIL released for bottleneck functions, @ decorators, etc.
I loved it, being able to float between Python and C in a program, writing in Python whatever was more convenient in Python, in C whatever was more convenient in C, or anywhere in between.
"While pure Python scripts can be compiled with Cython, it usually results only in a speed gain of about 20%-50%."
It usually does work on individual small functions, in my experience.
I've seen 2x improvement on string processing code (cleaning text for input to a ML model) by doing nothing other than sticking `%%cython` at the top of a notebook cell.
I don't know if this technique scales to other applications; probably not. But Cython syntactically is a superset of Python.
That's misleading, as normal type annotations don't give you any performance benefit at all, and you may need to do a lot more than just add type annotations.
I think it's far more accurate to say that Cython is a programming language of its own that is a hybrid of Python and C++, that happens to produce CPython extension modules when compiled.
The performance benefits are really achieved by incrementally changing your Python code to something that looks a lot more like C(++).
This is also reflected in the Cython documentation which literally mentions the "Cython language".
Cython is great as an alternative to writing C extension modules for performance reasons or to creating bindings to libraries written in C or C++. It's not so great just to make Python applications faster as it's not fully compatible[1].
> The performance benefits are really achieved by incrementally changing your Python code to something that looks a lot more like C(++)
I completely agree. Cython can become quite attractive if your alternative is "write a python extension library by hand in C / C++". I first started using Cython after doing exactly that, writing my extension library in C, then realising that Cython might save a lot of work in generating the bindings and packaging/distribution -- it did, and it ran at exactly the same speed as my pure C library with hand crafted python bindings. After that I've been pretty excited about Cython.
If you've got a python program that needs to do a core of compute-heavy work, if you were to optimise this by writing a C / C++ library for python to use, the work would be: (i) think hard about how the library design will enable performance, (ii) implement that high performance library in C / C++ , (iii) figure out the interface so that python can call into the library, and (iv) figure out how to package and distribute the library so it can be used by python programs.
Cython doesn't really help with parts (i) designing for performance or (ii) writing that high performance code. But it helps a lot with parts (iii) and (iv), generating Python bindings and producing wheel archives that can be managed by existing python package management tooling.
If family (with fuzzy conditions) ponied up $250k, is it bootstrapped?