Tangurena2's comments — Hacker News

And the CPU that was designed to implement Ada also failed miserably: the iAPX 432.

https://en.wikipedia.org/wiki/Intel_iAPX_432


The claim that it was designed for Ada was just marketing hype, like today's attempts to sell processors "designed for AI".

The concept of iAPX 432 had been finalized before Ada won the Department of Defense competition.

iAPX 432 was designed based on the idea that such an architecture would be more suitable for high level languages, without having at that time Ada or any other specific language in mind.

The iAPX designers thought that the most important feature that would make the processor better suited for high-level languages was to disallow direct addressing of memory, and instead to control memory accesses in a way that would prevent any access outside the intended memory object.

The designers made many other mistakes, but an important one was that the object-based memory-access control they implemented was far too complex to implement efficiently in the technology available. As a result, they could not fit everything on one chip and had to split the CPU across multiple chips, which created additional challenges.

Eventually, the "32-bit" iAPX 432 was much slower than the 16-bit 80286, even though the 80286 had also been contaminated by the ideas of the 432: it had a much too complicated memory-protection mechanism, which was never fully used in any relevant commercial product and was replaced by the much simpler paged memory of the 80386.

The failure of the 432 and the partial failure of the 286 (a very large part of the chip implemented features that were never used in the IBM PC/AT and compatibles) are not failures of Ada, but failures of a plan to provide complex memory-access protections in hardware, instead of simpler methods based on page access rights and/or comparisons with access limits under software control.

Now there are attempts to move some parts of memory-access control back into hardware, like CHERI on Arm, but I do not like them. I prefer simpler methods, like the conditional traps of IBM POWER, which allow cheaper checking of out-of-bounds accesses without any of the disadvantages of approaches like CHERI, which need special pointers that consume resources permanently, not only where they are needed.


The other CPU that was designed for Ada succeeded spectacularly:

https://datamuseum.dk/wiki/Rational/R1000s400


I do not know much about the architecture of the Rational R1000s400, but even so I am pretty certain that the claims that it was particularly good for implementing Ada were not true.

Ada can be implemented on any processor with no particular difficulties. There are perceived difficulties, but those are not difficulties specific to Ada.

Ada is a language that demands correct behavior from the processor, e.g. the detection of various error conditions. The same demands should be made for any program written in any language, but the users of other computing environments have been brainwashed by vendors that they must not demand correct behavior from their computers, so that the vendors could increase their profits by not adding the circuits needed to enforce correctness.

Thus Ada may be slower than it should be on processors that do not provide appropriate means for error detection, like RISC-V.

However that does not have anything to do with the language. The same problems will affect C, if you demand that the so-called undefined behavior must be implemented as generating exceptions for signaling when errors happen. If you implement Ada in YOLO mode, like C is normally implemented, Ada will be as fast as C on any processor. If you compile C enabling the sanitizer options, it will have the same speed as normal Ada, on the same CPU.

In the case of the Rational R1000s400, besides the fact that it must have had features that would be equally useful for implementing any programming language, it is said that it also had an Ada-specific instruction for implementing task rendez-vous.

This must have been indeed helpful for Ada implementers, but it really is not a big deal.

The text says: "the notoriously difficult to implement Ada Rendez-Vous mechanism executes in a single instruction". I do not agree with "notoriously difficult".

It is true that on a CPU without appropriate atomic instructions and memory barriers, any kind of inter-thread communication becomes exceedingly difficult to implement. But with the right instructions, implementing the Ada rendez-vous mechanism is simple. Even an Intel 8088 would have had no difficulty implementing this, while on the 80486 and later CPUs such implementations can reach maximum efficiency.

While in Ada the so-called rendez-vous is the primitive used for inter-thread communication, it is a rather high-level mechanism, so it can be implemented with a lower-level primitive: the sending of a one-way message from one thread to another. One rendez-vous between two threads is equivalent to two one-way messages (i.e. one from the 1st thread to the 2nd, then one in the reverse direction). So implementing correctly the simpler mechanism of sending a one-way inter-thread message allows the trivial implementation of rendez-vous.

The rendez-vous mechanism was put in the language specification, even though it would have been better placed in a standard library, because it was mandated by the STEELMAN requirements published in June 1978, one year before the close of the DoD language competition.

So this feature was one of the last added to the language, because the Department of Defense requested it only in the last revision of the requirements.

An equivalent mechanism was described by Hoare in the famous CSP paper. However, CSP was published a couple of months after the STEELMAN requirements.

I wonder whether the STEELMAN authors arrived at this concept independently, or whether they had read a preprint of Hoare's paper.

It is also possible that both STEELMAN and Hoare were independently inspired by the Interprocess Calls of Multics (1967), which were equivalent to the rendez-vous of Ada. However, the very close coincidence in time between the CSP publication and the STEELMAN revision of the requirements makes it plausible that a preprint of Hoare's paper prompted this revision.


The 286 worked perfectly fine. If you take a 16-bit unix and you run it on a 286 with enough memory then it runs fine.

Where it went wrong is in two areas: 1) as far as I know, the 286 does not correctly restart all instructions when they reference a segment that is not present, so swapping doesn't really work as well as people would like.

The big problem, however, was that in the PC market, 808[68] applications had access to all memory (at most 640 KB). Compilers (including C compilers) had "far" pointers, etc., that would allow programs to use more than 64 KB of memory. There was no easy way to do this in 286 protected mode. Also, a lot of programs were essentially written for CP/M. Microsoft and IBM started working on OS/2, but progress was slow enough that soon the 386 became available.

The 386 of course retained the complete 286 architecture, extended to 32 bits. Even when flat memory is used through paging, segments still have to be configured.


The 286 worked perfectly fine as an improved 8086, for running MS-DOS, an OS designed for 8088/8086, not for 286.

Nobody has ever used the 286 "protected mode" in the way intended by its designers.

The managers of "extended memory", like HIMEM.SYS, used the "protected mode" only briefly, and only to be able to access memory above 1 MB.

There were operating systems intended for the 286, like XENIX and OS/2 1.x, but even those used only a small subset of the features of the 286 "protected mode". Moreover, only a negligible fraction of 286 computers were ever used with OS/2 1.x or XENIX, compared with those running MS-DOS/DR-DOS.



I think the issue is deeper than that. In the US, data about you belongs to the company that owns the hardware that the data is stored on. In the EU, data about you belongs to you.

My point is aside from policy, knowing what you give up to use that free software is a huge part of the equation.

You can at least question an officer in court. Automated stuff is incapable of testifying - which is why traffic camera "tickets" are not enforceable in every state.

Facial recognition performs so poorly on non-white people that you'd have to find the most racist officer, the kind who says "they all look the same to me", to match its error rate.


> You can at least question an officer in court. Automated stuff is incapable of testifying - which is why traffic camera "tickets" are not enforceable in every state.

That's beside the point; you don't need to question a picture with accompanying information (such as location and detected speed).

> Facial recognition performs so poorly on non-white people

You don't need facial recognition. Car with plate XYZ (trivial character recognition) ran a red light, $1000 fine with associated picture proof of the crime sent to the owner of the car as registered in their locality. Done.


“Not sure who was driving”

You'd be surprised, but for most of those red light tickets the city subreddit advice will be: "ignore it, don't even look up the ticket number, because that acknowledges you received the ticket." They only mail it to you via regular mail. They have no clue whether it actually got to you.


> “Not sure who was driving”

Doesn't matter, fine the owner and let them deal with the driver.


> You can at least question an officer in court.

This is true in theory but not so much in practice. The American legal system only works for people with enough time and/or money to pursue justice (or whatever else they want from the legal system). Like traffic tickets on a road trip - very few people can actually go back to fight them.

Facial recognition is irrelevant if the liability is on whomever the vehicle is registered to.


PATRIOT Act & Bank Secrecy laws make it illegal to notify a person that they are being investigated, or that they are a subject in an investigation.

> It's not unheard of for an officer themselves to be the stalker

This was one of the motivations for passage of the Driver's Privacy Protection Act of 1994. Nowadays, officers need a legitimate reason to run a plate - unless the patrol car is fitted with automatic cameras[1] that look up every plate of every car they drive past.

> The Virginia state police used license plate readers to track people’s attendance at political events;

> The New York Police Department used license plate readers to keep track of who visited certain places of worship, and how often;

> Despite all this surveillance, ALPR technology has been repeatedly shown to be unreliable; like other police technologies, ALPRs can and do make mistakes.[2]

Generally, court decisions have held that you have zero expectation of privacy when you are in public spaces. Current license plate standards[3] aim for plates that are uncluttered and easily read by the human eyeball, even when wrapped in license plate frames (which usually make the state name hard or impossible to read, the most common failure mode for ALPR[4]). If the reflective material (traditionally called "ScotchLite"[5]) is worn out (or defaced), most states require the plate to be replaced.

Notes:

0 - https://en.wikipedia.org/wiki/Driver%27s_Privacy_Protection_... Prior to passage, a slang term for running/looking up the plate/registration of a car with a pretty woman driver was "running a date".

1 - https://sls.eff.org/technologies/automated-license-plate-rea...

2 - https://www.aclum.org/publications/what-you-need-know-about-...

3 - https://www.aamva.org/getmedia/646bcc8a-219b-47d8-b5cd-72624...

4 - https://www.aamva.org/getmedia/0063bf88-cb44-4ab9-90b6-200c8...

5 - https://www.3m.com/3M/en_US/scotchlite-reflective-material-u...

Disclaimers:

I used to work for my state's motor vehicle department and had database/developer access to driving licenses and motor vehicle registration records.

I graduated from a police academy when I was a youngster.


One simple remedy would be to make companies (that collect such private data) and their directors/executives jointly and severally liable[0] for any identity theft. It should come with "forever" liability equivalent to that of Superfund sites[1].

Notes:

0 - Financial penalties would not be limited to "your share" of the penalty. If you have money, and the other parties don't, the plaintiffs can collect from whichever defendant has money.

1 - Everyone who ever owned the site with the toxic waste is liable for the cleanup. This is why when a gas station is sold (in the US), all of the fuel tanks are dug up and replaced - this way, none of the future leakage can be attributed to the previous owners.


Watching what bills show up in my state's legislature, I see that several of them address "Hollywood plots" rather than real-world issues.

For example, one legislator always sponsors a bill (which goes nowhere every year) to outlaw chemtrails. This year's version[0] includes the plot from the SF novel Termination Shock[1]. The word "artillery" was not in any previous session's version, nor was sulfur.

Links:

0 - https://apps.legislature.ky.gov/record/26rs/hb60.html

1 - https://en.wikipedia.org/wiki/Termination_Shock_(novel)#


I don't see bipedal murderbots being commonplace - they're a lot slower than 4-legged "Big Dogs". I think that the Ukraine war has shown that "slaughterbots" are far more likely.

https://www.youtube.com/watch?v=O-2tpwW0kmU


bipedal murderbots... not yet... I think advanced exoskeletons will be there first. They are already testing basic ones in the field:

https://www.businessinsider.com/ukraine-exoskeleton-test-bat...


I work in the state government space. Many targets/victims of ransomware are small/local government agencies and the ransom demands are greater than their annual budgets. Not every agency is big enough to have someone (bored) come in on Sunday, notice stuff getting encrypted and then run in to the server room and hit the big red button like Virginia's legislature in 2021[0].

Many ransoms are far more than the victim can actually pay. Not all ransom payments result in a decryption key that actually works.

Notes:

0 - https://www.nbcnews.com/politics/politics-news/officials-vir...


Most local governments lack the scale and budget to competently maintain their own IT infrastructure. It's not just security but everything. They should outsource the infrastructure layer to a large contractor, or possibly to the state government.

Contracting IT services at that level overpays by a whole-number multiple for worse results, because the government doesn’t have the in-house expertise to tell when the contractor is doing something wrong. (This is one reason many construction projects go over budget: someone saved money by laying off the engineers, so they pay 2-3x more for contractor A to oversee contractor B, guaranteeing 3+-party disputes for every problem.)

What does work better is outsourcing an entire function: if you pay Gmail for email services, you know exactly how much it will cost per user and have an SLA for problems which they can’t blame on you.

