
For screws that go into wood (or thereabouts; sheet metal screws can count here too), Robertson is first on my list. The slightly tapered interface fits snugly and tends to hold screws onto a driver bit very well while providing good angular alignment, and it can do all of this without needing magnets. It can also allow for a bit of angular misalignment when necessary, and the chonky squared-off corners and flats resist cam-out/stripping rather well. I'm not even Canadian and I use Robertson wherever I get a chance.

Second general choice is hex-head screws. These snap into inexpensive shallow magnetic driver bits with a satisfying click. But they're ugly and kind of rough once installed, and they're not available in flat-head versions. (Hex-head shoulder screws are my only choice for drill-point screws: They maintain positive angular alignment by default and that's crucial for drilling holes in metal.)

Torx is fine too, I suppose. I don't have an anti-Torx rule in my workshop, but I try to avoid buying them. They don't tolerate angular misalignment as well as Robertson, and they don't maintain positive angular alignment like a hexagonal shoulder screw does, and they don't stay put with friction like Robertson or snap onto a magnetic driver bit like hex. There's lots of stuff Torx is not very good at doing.

Theoretically, I can probably put more torque into Torx than any of the other options listed here, but I don't find that to be a practical advantage in this kind of application: When I can already drive a Robertson screw through a chunk of old-growth wood, I don't need to improve that part.

---

For machine screws, I've standardized on stainless steel button-head socket cap screws as a first choice. I've got a deeper selection of them than a good hardware store does, and they're all sorted. (Why stainless instead of graded? Because I don't want them to rot, whether they're sitting in a bin for decades or used outdoors or whatever, and I don't want to stock more than one kind. My fastener collection is crazy enough without also multiplying it by different grades.)

There's other stuff, though, too. For instance: Regular socket cap screws have their place -- it just isn't first place.

And at the scale of things I build, I mostly use M3, M4, and M5.

I buy regular-length Bondhus non-ball hex keys in simple bulk packaging to fit my standard M3, M4, and M5 machine screws. They're very high quality tools, and they're rather inexpensive in bulk. And thus, it is no big deal if I misplace one while I'm working -- I've got more on-hand, and I'm not afraid to get more coming if stock gets low.

Bondhus makes a decent-quality hexagonal screwdriver, too, and these are nice to keep around because the design is not like the gigantic T-handled abortions that so many other manufacturers sell: It's just screwdriver-shaped, and it works just like a familiar screwdriver does -- but for socket-cap screws! This was the discovery that allowed me to completely abolish Phillips screws forever from my workshop.

I've also got sets of hex keys -- of course I do. Long, ball-end, plain, whatever. I try to avoid cornering myself into situations where these non-regular variations would ever be useful in the first place, and the long versions are mostly only good for disassembling stuff that someone else has overtorqued. (But overtorqued fasteners are a different rant.)


> Who you know is more important

Networking is all about who knows you, not who you know.


Peter Norvig here. I came to Python not because I thought it was a better/acceptable/pragmatic Lisp, but because it was better pseudocode. Several students claimed that they had a hard time mapping from the pseudocode in my AI textbook to the Lisp code that Russell and I had online. So I looked for the language that was most like our pseudocode, and found that Python was the best match. Then I had to teach myself enough Python to implement the examples from the textbook. I found that Python was very nice for certain types of small problems, and had the libraries I needed to integrate with lots of other stuff, at Google and elsewhere on the net.

I think Lisp still has an edge for larger projects and for applications where the speed of the compiled code is important. But Python has the edge (with a large number of students) when the main goal is communication, not programming per se.

In terms of programming-in-the-large, at Google and elsewhere, I think that language choice is not as important as all the other choices: if you have the right overall architecture, the right team of programmers, the right development process that allows for rapid development with continuous improvement, then many languages will work for you; if you don't have those things you're in trouble regardless of your language choice.


As someone who worked on Intel's phone chip: we definitely didn't win it. We fucked it up twelve ways to Sunday. Why: giant egos. There were turf wars between Austin, Santa Clara and Israel over who would design it, and the team that won out had long since lost its best principal engineers and had no clue how to spin the architecture to meet the design win. Otellini's hindsight hedge is pure spin: we knew the landing zone, we just didn't know how to get there. And the aforementioned turf war guaranteed we didn't get access to other teams' talent. I'm bitter because it was a really fun team when I moved from Motorola to Intel Austin, and then it just corroded over political battles.

Based on the Spotify revenue model, my favourite artist might make a few dollars off me over our entire lifetimes. That is not enough to sustain any but the biggest artists. I am happy to contribute more to smaller artists I appreciate: seeing them on tour, buying CDs, buying merch, Patreon-type models, and even straight-up contributions on PayPal and Venmo. I've been buying a couple hundred bucks' worth every Bandcamp Friday since the start of covid. It is not just Spotify; none of the streaming freemium music subscription services provide an effective way to support the artists I like.

Cory Doctorow points the way in Information Doesn’t Want to Be Free talking about future models for artists to make a living. It might superficially seem like the Spotify CEO is saying the same things but this is just cover for the gatekeepers to keep an unconscionable share for themselves.


The main problem is that current antitrust law doesn't clearly prohibit anti-competitive behavior like this. Modern antitrust law has largely been created by judges (often without any economic training) interpreting early 20th century laws. This is a great summary of how regulations could be adapted to stop this sort of activity: https://www.yalelawjournal.org/pdf/e.710.Khan.805_zuvfyyeh.p...

In short, competition needs to be increased (i.e. more platforms). For example, Amazon wouldn't be able to pull so many shenanigans if users could export all shopping data to jet.com and easily shop on multiple platforms.

The other solution is updated regulations that follow in the footsteps of historical frameworks of antitrust in industries like utilities where it's accepted that monopolies are efficient, but strict measures are put in place to limit abusing market power.


Shameless plug, I made hobbes and used it in high volume trading systems at Morgan Stanley:

https://github.com/Morgan-Stanley/hobbes

It’s kind of a structurally-typed variant of Haskell, integrates closely with C++, produces very fast code.


I wonder what effect this will have on our culture long term. It would be unfortunate if Glass' determination ends up being seen as the ground truth.

I have noticed a certain phenomenon where people who are bad at something read books about it to compensate, and then having read books on the subjects start considering themselves experts on it and preach to others, closing the loop from descriptive to prescriptive.

Edited: Added second paragraph


Only a beginner can teach a beginner. First you struggle. Then you think you know it all. Then you forget what was so hard. Then you master it. And then you realize you know very little.

  This is the TXR Lisp interactive listener of TXR 233.
  Quit with :quit or Ctrl-D on empty line. Ctrl-X ? for cheatsheet.
  1> (countq #\h "fhqwhgads")
  2
  2> [[callf equal identity reverse] "palindrome"]
  nil
  3> [[callf equal identity reverse] "racecar"]
  t
  4> [(opip (mappend [iff (op > (countq @1 @@1) 1) list] @1) uniq) "applause"]
  "ap"
  5> [(opip (mappend [iff (op > (countq @1 @@1) 1) list] @1) uniq) "foo"]
  "o"
  6> [(opip (mappend [iff (op > (countq @1 @@1) 1) list] @1) uniq) "baz"]
  ""
  7> [[mapf equal sort sort] "teapot" "toptea"]
  t
  8> [[mapf equal identity sort] "apple" "elap"]
  nil
  9> [(op mappend [iff (op eql (countq @1 @@1) 1) list] @1) "somewhat heterogeneous"]
  "mwa rgnu"
  10> [(do and (= (len @1) (len @2)) (search-str `@2@2` @1)) "foobar" "barfoo"]
  3
  11> [(do and (= (len @1) (len @2)) (search-str `@2@2` @1)) "fboaro" "foobar"]
  nil
  12> [sort '#"books apple peanut aardvark melon pie" : len]
  ("pie" "books" "apple" "melon" "peanut" "aardvark")
Yawn ...

  13> (perm "xyz")  ;; non-recursive, lazy, written in C.
  ("xyz" "xzy" "yxz" "yzx" "zxy" "zyx")

It looks way too long in the number of different tokens.

  This is the TXR Lisp interactive listener of TXR 233.
  Quit with :quit or Ctrl-D on empty line. Ctrl-X ? for cheatsheet.
  1> (defun merge-hashes (hlist)
       (let ((hout (hash)))
        (each ((h hlist))
          (dohash (k v h) (push v [hout k])))
       hout))
  merge-hashes
  2> (merge-hashes '(#H(() (a 1) (b 2) (c 3)) #H(() (b 5) (c 7) (d 8))))
  #H(() (c (7 3)) (b (5 2)) (a (1)) (d (8)))

Now here is that function fully code golfed with unnecessary spaces removed and all symbols one character long:

  (defun m(x)(let((o(hash)))(each((h x))(dohash(k v h)(push v[o k])))o))
That's down to 70 characters: only about double the K3 size.

If defun and the other built-ins were one character long, it would be down to 50 chars:

  (d m(x)(l((o(H)))(e((h x))(D(k v h)(p v[o k])))o))
Now we have a fair comparison where we have leveled the field, eliminating the difference due to whitespace elimination and token condensation.

Though still significantly longer by raw character count (50 versus 35), there is less clutter in it. Also, it defines the name m, whereas the {.+...} syntax needs a few more characters to define a function, I think.

But wait; that's far from the shortest code necessary. What I wrote is a decently efficient way of doing it which iterates the input hashes imperatively and builds up the output. It can be done in other ways, like this:

  1> (defun merge-hashes (hlist)
       [group-reduce (hash) car (op cons (cdr @2) @1) [mappend hash-alist hlist]])
  merge-hashes
  2> (merge-hashes '(#H(() (a 1) (b 2) (c 3)) #H(() (b 5) (c 7) (d 8))))
  #H(() (d (8)) (c (7 3)) (b (5 2)) (a (1)))
We can obtain a flat "assoc list" of all the key value pairs from all the dictionaries by mappend-ing them through hash-alist. hash-alist retrieves an assoc list from a single dictionary, and we map over that, appending these together.

Then we can group-reduce the assoc list; group-reduce populates a hash by classifying elements from an input sequence into keys, which denote individual reduce accumulators. So all the a elements are subject to their own reduce job, so are the b elements and so forth. The reduce function is (op cons (cdr @2) @1); an anonymous function that takes the cdr (value element) of each pair (that pair coming in as argument 2), and conses it onto the accumulator (coming in as argument 1), returning the new accumulator. Since nil is the empty list, and fetching a nonexistent hash key yields nil, the accumulation can bootstrap implicitly from an empty hash.

Now if we were to remove all unnecessary whitespace and give a one-character name to everything, we now get:

  (d m(h)[R(H)c(o n(r @2)@1)[M L h]])
That shows there is a potential to get this down to 35 characters, with suitable function/operator and variable names.

Moreover, these 35 characters are yet easier to parse because all the parentheses/brackets are there: they comprise 14 out of the 35 characters! And we still have 5 spaces. So 19 characters out of the 35 are punctuation having to do with shaping the syntax; 16 are semantic.
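The counting above is easy to verify mechanically (a quick Python sanity check over the golfed string, nothing more):

```python
golfed = "(d m(h)[R(H)c(o n(r @2)@1)[M L h]])"

# 35 characters total; parens, brackets and spaces are the
# syntax-shaping punctuation, the rest carry the semantics.
total = len(golfed)                        # 35
punct = sum(c in "()[] " for c in golfed)  # 14 brackets/parens + 5 spaces
semantic = total - punct                   # 16
```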

If you remember that R is group-reduce, you know that its arguments are (H) c (o ...) and [M ...]. There is no guesswork about what goes with what.

The above K expression is actually verbose because it doesn't use very many characters for shaping the syntax tree. Most of the characters you see do something.

Languages like K leverage their terseness not just from compression of the input notation, but from the semantics of the operations. But they do not have a monopoly in semantics. Good semantics that enables terse programming can be found in languages that don't use terse notations at the token level.

The K3 example we have here is not leveraging good semantics; if that's the best that can be done, it suggests that k doesn't have a well-rounded library of operations for concisely manipulating dictionaries. (Could it be that the insistence on one-character names creates a pressure against that?)

It's better to have thousands of functions with descriptive names, and then be able to choose the best ones for expressing a given problem tersely, than to reach for one-letter naming as the primary means for achieving terseness, and then try to pull a one-size-fits-all library within that naming convention.
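For readers who don't speak TXR Lisp, a rough Python analogue of the merge being discussed (a sketch, not a transliteration; TXR's push prepends, so the value lists come out newest-first):

```python
from collections import defaultdict

def merge_hashes(hlist):
    """Merge a list of dicts into one dict mapping each key to the
    list of all values seen for it, newest first (mirroring push)."""
    out = defaultdict(list)
    for h in hlist:
        for k, v in h.items():
            out[k].insert(0, v)  # prepend, like TXR's push
    return dict(out)

merge_hashes([{"a": 1, "b": 2, "c": 3}, {"b": 5, "c": 7, "d": 8}])
# "b" maps to [5, 2] and "c" to [7, 3], matching the TXR session
```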


Other, related points:

- With heavy tails, the sample mean (i.e. the number you can see) is very likely to underestimate the population mean.

- With heavy enough tails, higher moments like variance (and therefore standard deviation) do not exist at all -- they're infinite.

- Critically: With heavy tails, the central limit theorem breaks down. Sums of heavy-tailed samples converge to a normal distribution so slowly it might not realistically ever happen with your finite data. Any computation you do that explicitly or implicitly relies on the CLT will give you junk results!
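A small simulation illustrates the first point, using a Pareto distribution with tail index 1.5 (finite mean, infinite variance); the seed, sample size, and trial count here are arbitrary choices for the demo:

```python
import random

# Pareto with x_m = 1 and alpha = 1.5: mean = alpha/(alpha-1) = 3,
# but the variance is infinite (alpha <= 2).
alpha = 1.5
true_mean = alpha / (alpha - 1)

random.seed(0)
trials = 1000
below = 0
for _ in range(trials):
    xs = [random.paretovariate(alpha) for _ in range(100)]
    if sum(xs) / len(xs) < true_mean:
        below += 1

# Well over half of all samples have a mean below the population mean:
# the true mean is propped up by rare huge draws that most finite
# samples never contain.
frac_below = below / trials
```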


Michael from f.lux: This research is from a very talented group of researchers, but it is unclear if it will translate from nocturnal mice to humans (as others have said). The evidence in humans is mixed, but it either shows no effect or a tendency in the other direction. There is a study in the same issue (Spitschan) that says there is no effect in humans.

1. First, the study does not question the contribution of melanopsin (the blue-cyan opsin that got everyone talking about "blue light") - it asks a more subtle question: when you hold melanopsin stimulation constant, what does the remaining light do and why? Here they are finding whether the cones oppose or boost melanopsin based on color signals. But regardless of how this works in humans, we should still expect bright-enough blue light at night to be stimulating, because of the response due to melanopsin.

2. Holding the melanopic portion of a light constant is not something we usually do. For most lights we have today, the "blue" lights would be considerably dimmer than the "yellow/red" ones if we did this. When we compare lights of equal visible brightness, the yellow ones are known to have less effect on human melatonin suppression [Chellappa 2011].

3. The evidence in humans is mixed, but it actually goes the other direction (saying blue is more stimulating), or there is no clear effect. In the same issue, a study on humans by Spitschan found a negative result on whether or not S-cone contrast has an effect: https://www.cell.com/current-biology/fulltext/S0960-9822(19)...

3b. Other research (in monochromatic and polychromatic light) finds that humans are more sensitive to blue light than melanopsin would suggest. See a list below.

4. We're all still trying to explain how the transition to dusk is blue/purple, while our own lighting doesn't do that. We have built our lighting to be relatively bright, but warm. It is not "natural" to extend the day like we do, but it likely would not help anything to make the lights more blue, unless they were quite a lot dimmer, or used very novel spectra.

Here is a list of references for the evidence +/- blue sensitivity (not melanopsin) in humans:

1. The Thapan study from 2001 indicates extra blue-light sensitivity in addition to melanopsin. Lights are seen for a half hour at night. https://doi.org/10.1111/j.1469-7793.2001.t01-1-00261.x

2. The Spitschan study from this same issue of Current Biology says there is no effect in either direction when comparing 83x S-cone contrast. The lights here are "pink" (which has a lot of blue) and "orange" which has very little. https://www.cell.com/current-biology/fulltext/S0960-9822(19)...

3. The Brainard 2015 study compares 4000k to 17000k lights: at the same "melanopic" level the 17000k lights do a lot more melatonin suppression: https://jdc.jefferson.edu/cgi/viewcontent.cgi?article=1081&c...

4. There is one important study in humans (Gooley 2010) that says we can be more sensitive to 555nm light after two days in dim light, so that mirrors this study. But this is not exactly comparable to the study cited here due to sensitization: it stands on its own due to the duration of the experiment.

It would be interesting if we could find some "truth" to the idea that twilight colors affect human circadian entrainment - it has been a recurrent idea for many years. We finally have the technology to target melanopsin separately from the S-cone (see Spitschan's work for an example).

Given the press these results get, you'd be surprised that there has been extremely little research funding for most of these things in the last ten years. In a way, I hope that mixed results like these might help! How light affects us at lower levels, and how different we are from each other, is not "solved" at all, so there is still a lot of work to do.


I used to work for a startup considered by many to be a tech company. Our devs cared a lot about code quality, visited meetups, engaged in the community, etc. Devs usually stayed for at least 3-5 yrs.

In contrast, there was a company that did almost the same business as we did. But they were what people would have called a sales company, not a tech company. Decisions were based on sales opportunities only. No one gave a shit about code quality, devs didn't stay long and often were frustrated when they applied at our company.

Which one was more successful? Unfortunately, they were... We were often slowed down by discussions about code quality, ways of working, ethics, and disrespect for revenue-driven decisions.

Maybe it was just an exception? But since then I have become skeptical when I hear about “tech” companies. I feel we nerds need to be value driven, too, and not die in beauty.


https://arxiv.org/abs/1902.10811 is a useful counterpoint to this article's comments on ImageNet overfitting (esp. §3.3, "Few Changes in the Relative Order").

The problem with this type of inflation is that it's lopsided. Assets inflate -- equities and real estate. That's great for asset owners. But wages haven't really increased, because most workers don't "need" to own equities, and most Americans already own a house, so they aren't affected by rising rents and home prices.

This is why rent can rise 10% in one year -- the largest expense for nearly everyone -- and you can still have 1% inflation. Most people aren't renters, so they aren't paying more for their rent. In fact, when bond yields go down (from this manipulation), mortgages get cheaper, so most people are paying LESS for the same house / mortgage.

Equities go through the roof.

If you're a laborer / renter, this is like a double gut punch. If you're a capitalist aristocrat, it's like a double gift horse.

And, as far as inflation goes, at least the way the Fed measures it -- it doesn't have a huge effect.

Commodities are so globalized now, and the US isn't 50% of the global economy anymore -- more like 20%. So strong upward pressure on commodities here doesn't have a huge impact on global commodity prices.


Here's a nifty visualization of magnetic pole location and strength from NOAA-

https://maps.ngdc.noaa.gov/viewers/historical_declination/

edit: Some more on this stuff here (probably to be taken with a grain of salt, but neat no less)-

https://magneticreversal.org/


The primary social function of giving advice is a domination game (http://www.overcomingbias.com/2015/03/advice-shows-status.ht..., http://www.overcomingbias.com/2014/01/advice-isnt-about-info...) - that's why there is a lot of shitty advice. That does not mean there are no good business theories that cover startups. Some of them are scientific theories with all the required rigour - but not all theories need to be properly scientific to be useful; in our daily lives we live with lots of ex-post theories, which are not perfect but are still useful. By the way, I am the author of one non-scientific startup theory myself https://medium.com/hackernoon/aggregators-bffd36063a72 and I hope it can be useful:) There is also useful advice out there. It is good to read it, evaluate it, adjust it to your circumstances, etc. In the end you need to decide for yourself, but advice shows you the possibilities.

Stanislaw Lem 1921-2006

https://www.washingtonexaminer.com/weekly-standard/stanislaw...

> Lem's IQ, as he mentioned in passing in an autobiographical essay (it was measured when he was in high school), was above 180, but no one who read many of his books needed that datum to conclude that here was an unusually powerful and wide-ranging intelligence. The son of a physician, Lem was trained in the sciences. Biology was his field, but in his mid-twenties he became a research assistant at what he described as a "kind of clearinghouse for scientific literature" in many disciplines coming into Poland from around the world.

The examples I think of especially are:

- "Eden" -- https://en.wikipedia.org/wiki/Eden_%28Lem_novel%29

Not as extreme as the other two in highlighting the impossibility of meaningful contact; Lem becomes much more skeptical later in his career. Here they actually do make contact with one individual alien and communicate with it. In Fiasco there is some communication, but it all turns out to be completely different than expected based on human assumptions (and it all ends with the humans destroying everything, all from good intentions, of course). In "Solaris" there is no meaningful communication at all; one can't even say there are misunderstandings.

- "The Invincible" -- https://en.wikipedia.org/wiki/The_Invincible

This is about non-biological self-replicating swarming alien micro-robotic life forms.

- "Solaris" -- https://en.wikipedia.org/wiki/Solaris_%28novel%29

> Solaris chronicles the ultimate futility of attempted communications with the extraterrestrial life inhabiting a distant alien planet named Solaris. The planet is almost completely covered with an ocean of gel that is revealed to be a single, planet-encompassing organism. Terran scientists conclude it is a sentient being and attempt to communicate with it.

- "His Master's Voice" - https://en.wikipedia.org/wiki/His_Master%27s_Voice_%28novel%...

> The novel is written as a first-person narrative, the memoir of a mathematician named Peter Hogarth, who becomes involved in a Pentagon-directed project (code-named "His Master's Voice", or HMV for short[2]) in the Nevada desert, where scientists are working to decode what seems to be a message from outer space (specifically, a neutrino signal from the Canis Minor constellation)

- "Fiasco" -- https://en.wikipedia.org/wiki/Fiasco_%28novel%29

> The book is a further elaboration of Lem's skepticism: in Lem's opinion, the difficulty in communication with alien civilizations is cultural disparity rather than spatial distance. The failure to communicate with an alien civilization is the main theme of the book.


The part about Lee Iacocco introducing the Mustang reminded me about this passage from

In Search of Stupidity: Over 20 Years of High-Tech Marketing Disasters by Merrill R. Chapman

> In the auto industry, a classic example is the Ford Falcon. The brainchild of "whiz kid" Robert McNamara, the Falcon was designed from the get-go as a "people's car." In other words, it couldn't go very fast, it got good gas mileage, and it was economical to run. Extolling these virtues was the car's deliberately plug-ugly design, one that proclaimed the vehicle was in the service of the lumpen proletariat, those who only drive and serve. The lumpen proletariat didn't appreciate the sentiments the Falcon reflected, and although people who couldn't afford anything more bought the Falcon, they drove the car without joy and bought few of the optional accessories that made selling the car profitable.

> On the other hand, the Ford Mustang when it was released in 1964 was a phenomenon, and Ford couldn't make enough of them to meet demand. Mustangs were fun, sexy, and desirable. Mustang owners were intelligent and cool people with a great sense of value, the type of folks you wished would invite you to a barbecue at their place. Of course, the Mustang also wouldn't go very fast (though it looked like it could), got good gas mileage, and was very economical to run. This is because it was, underneath its alluring sheet metal, nothing more than a reskinned Ford Falcon. But by dint of good design and the addition of key features that proclaimed the car wasn't for old farts (such as a snazzy steering wheel and bucket seats) and sporty options (such as high-profit, high-performance engines), the Mustang became a car you could aspire to whereas the Falcon was just a cheap set of wheels.


I kind of wish there was a lisp with the "recompile on error and continue" feature of Common Lisp but without a massive standard library. A standalone SBCL program seems to be around 40MB at minimum. It feels like it would be doable if only CL wasn't designed with the kitchen sink included. Recently I've gotten into Janet[0] and really like the language, although I do miss CL's recompilation magic at times. It feels like a Lisp dialect with similar style to Lua: tables, coroutines, small language core, embeddable as a single C file, etc.

[0] https://www.janet-lang.org


> The third was that js has no operator overloading, so I had to use .__add__() for example to call the python add operator.

I expect this wouldn't have worked in the long run as these methods are often just part of the protocol e.g. even `a == b` will try `type(a).__eq__(a, b)` then fall back to `type(b).__eq__(b, a)` (~~and then it'll do some weird stuff with type names IIRC~~[0]).

And most operators are not considered symmetric so the fallback is not the same as the initial (even `+` has `__add__` and `__radd__`, also `__radd__` might be called first depending on the relationship between type(a) and type(b)).

And then there's the "operations" which fallback to entirely different protocols e.g. `in` will first try to use `__contains__`, if that doesn't exist it uses `iter()` which tries to use `__iter__` but if that doesn't exist it falls back to calling `__getitem__` with non-negative sequential integer.

Which is why sometimes you define `__getitem__` for a pseudo-mapping convenience and then you get weird blowups that it's been called with `0` (you only ever expected string-keys). Because someone somewhere used `in` on your object and you hadn't defined `__iter__` let alone `__contains__`.
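That blowup is easy to reproduce (a minimal sketch; the `Config` class and the keys are made up for illustration):

```python
class Config:
    """A pseudo-mapping that only defines __getitem__, expecting
    string keys (hypothetical example)."""
    def __init__(self, data):
        self._data = data

    def __getitem__(self, key):
        return self._data[key]

cfg = Config({"host": "localhost"})

# No __contains__ and no __iter__, so `in` falls back to calling
# __getitem__(0), __getitem__(1), ... -- and the dict lookup with
# the integer key 0 raises KeyError instead of returning False.
try:
    "port" in cfg
    outcome = "no error"
except KeyError as exc:
    outcome = f"KeyError({exc.args[0]!r})"
```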

Good times.

[0] I misremembered: it's for ordering (not equality) in Python 2[1]. `a < b` will first invoke `type(a).__lt__(a, b)`, then if that's not implemented fall back to the reflected `type(b).__gt__(b, a)`, and if that's not implemented either it'll fall back to a few hard-coded cases (e.g. None is smaller than everything) and finally to `(type(a).__name__, id(a)) < (type(b).__name__, id(b))`. That is, the order of independent types with no ordering defined is the lexicographic order of their type names, and if they're of the same type it's their position in memory.

[1] where there's always an ordering relationship between two objects -- one of the things I'm most grateful Python 3 removed, even if it's sometimes inconvenient
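For contrast, Python 3 simply refuses (a two-line check):

```python
# Python 3: no universal ordering -- comparing unrelated types raises
# TypeError instead of falling back to type-name ordering as Python 2 did.
try:
    None < 1
    verdict = "ordered"
except TypeError:
    verdict = "TypeError"
```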


I live by a couple of (vaguely related) principles:

a) Consciousness is just your brain trying to anticipate the future.

b) Your brain compresses (normalises?) repetition in memories. So even if day to day events happen at normal speed, the years seem to fly by when you reflect on them. If your life seems to be flying-by then maybe you need more novelty.


"The Art of Electronics", by Horowitz and Hill, is a respected textbook for this. The ARRL Handbook has many explained schematics for radio.

For modern commercial products, mostly you'll have some big special purpose ICs plus some minor components for power and noise management. The schematic won't tell you much because all the action is inside the ICs.

Here's something of mine you can look at, a design on Github made with KiCAD.[1] The schematic is here.[2] All the files to make a board are there, and both I and others have had working boards fabbed from those files.

The application is unusual - it's an interface for antique Teletype machines that need signals of 60mA at 120V. There are no off the shelf ICs for that. So there's a custom switching power supply to make that voltage from a 5V USB port. The README file for the project explains how it all works. It has all the extra parts you need in the real world to handle USB hot-plugging, keep the switcher noise out of the USB connection, keep RF noise down, and protect the circuit against a shorted output or a big inductive kick-back from the load.

The data sheet for the LT3750, the controller for the switching power supply, is essential when reading the schematic.[3]

You can download KiCAD and play with the files. You can also download LTSpice and run a simulation; the files for that are in the repository.

This is complex enough to be non-trivial, yet simple enough to be understandable.

[1] https://github.com/John-Nagle/ttyloopdriver [2] https://raw.githubusercontent.com/John-Nagle/ttyloopdriver/m... [3] https://www.analog.com/media/en/technical-documentation/data...


This is something that would be extremely useful on microcontrollers, where you do lots of event-driven programming. Any sane design ends up with a state machine (or a hierarchy of state machines) driven by events generated in interrupts.

Coming from Clojure, I wished for something like core.async on microcontrollers for a long time — a way to convert most of my state machine into sequential code, with the complexity hidden, all while keeping everything in C (converted/generated during compile).
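The transformation being wished for here -- turning an explicit event-driven state machine into sequential-looking code -- is exactly what coroutines give you. In Python terms (a sketch with made-up event and state names, standing in for what a C code generator would emit):

```python
def connection():
    # Reads sequentially, but each `yield` is a suspension point: the
    # event loop (or an ISR-driven dispatcher) resumes us with the next
    # event, so the "state" is simply the current position in this code.
    ev = yield "WAIT_SYN"
    assert ev == "syn"
    ev = yield "WAIT_ACK"
    assert ev == "ack"
    yield "ESTABLISHED"

sm = connection()
trace = [next(sm)]             # prime the coroutine -> "WAIT_SYN"
trace.append(sm.send("syn"))   # deliver an event    -> "WAIT_ACK"
trace.append(sm.send("ack"))   # deliver an event    -> "ESTABLISHED"
```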


If you're getting burnout doing agile, you're doing agile wrong.

Don't do sprints. Have a continuous backlog. Don't do overtime. Don't make estimates. Always do the simplest thing. Only ever do the most important thing, as defined by the stakeholder.

I've written and talked about this at great length. The fact people suggest agile gives you burnout reinforces my experience that Scrum is largely misinterpreted and people incorrectly focus on sprint commitments. If Scrum is so commonly misinterpreted, it is flawed.

https://www.linkedin.com/pulse/scrum-makes-you-dumb-daniel-j...

https://youtu.be/k9duArRuSjQ


I am no Fukushima apologist but that is a click-baity headline.

> The newly detected Fukushima radiation was minute... too low to pose a health concern... Cesium-137 levels some 3,000-times higher than those found in the Bering Sea are considered safe for human consumption under U.S. Environmental Protection Agency drinking water standards, officials said.

The standard for science articles on HN should involve significant concentrations of [bad thing], not just trace amounts. See: Enrico Fermi's "Caesar's last breath" exercise.[1]

For this to be significant there should be something disputing the EPA standards (which may very well be too lenient). But there's no such dispute mentioned in the report. It's just, "contaminants were found."

[1] http://www.hk-phy.org/articles/caesar/caesar_e.html


Probably want to add a PM (particulate matter) sensor to the mix.

I've built a few similar devices using ESP chips and various laser dust sensors. I highly recommend the Plantower PMS5003 - laser diffraction, PM1-PM10 accuracy, ~$20, and there are good libraries available on GitHub for interacting with it.

Edit: https://twitter.com/zensavona/status/1091949965306257409
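For reference, the PMS5003's serial frames are simple enough to parse by hand (a sketch based on my reading of the datasheet's frame layout: 0x42 0x4D header, big-endian 16-bit words, trailing checksum; double-check the field positions against the datasheet before relying on them):

```python
import struct

def parse_pms5003(frame: bytes) -> dict:
    """Parse one 32-byte PMS5003 frame into atmospheric PM readings."""
    if frame[0:2] != b"\x42\x4d":
        raise ValueError("bad header")
    # After the header: frame length, 13 data words, checksum (all u16 BE).
    words = struct.unpack(">15H", frame[2:32])
    if sum(frame[:30]) != words[-1]:
        raise ValueError("bad checksum")
    # words[1:4] are the CF=1 ("standard") values; words[4:7] are the
    # atmospheric-environment PM1.0/PM2.5/PM10 concentrations in ug/m3.
    return {"pm1.0": words[4], "pm2.5": words[5], "pm10": words[6]}
```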


I've spent the last 3 years driving around Africa - 35 countries and over 50,000 miles. In more than a few cities the traffic (and pollution) have been absolutely horrific, and I've often wondered if there is a large city where biking is much more common.

Biking is very common in rural areas.

Unfortunately I've had to skip Eritrea, although I really wanted to get there. Because of the recent peace deal with Ethiopia the borders are wide open... but that means nobody really knows how I can enter at a land border legally. There won't be anyone to stamp my passport, so it's very likely I'll be arrested by the first police that see me, even with a valid visa.

The ambassadors in Ethiopia and Djibouti were more than happy to give me a visa, but they had no idea what would happen if I tried to drive in. Maybe I'll have to go back!


What would be grandma-proof equivalent of the implicit firewall provided by ipv4 NAT?

The Right Way would be to make informed decisions per-port and per-protocol, but that's a nightmare to set up, and to maintain.

