Is it easier to write a program that behaves the way you want plus a proof that it behaves the way you want, or to just write the program? What if you goofed writing the specification, and now you have a program that is guaranteed to misbehave? How do you verify the verification?
Validation vs verification. Validation asks "is this the right spec?"; verification asks "does the program meet the spec?".
Turns out once you have a formal specification, you can do all sorts of things with it that you couldn't do with the code sitting alone. You can prove properties about the specification itself. Take seL4: even the abstract specification is quite big and can be hard to reason about in complex cases. How do you know that spec is the one you want? Well, they wrote down some security properties they'd like it to have, and then proved that the abstract spec has those properties (e.g., you can configure it to act like a separation kernel; it enforces isolation given an isolated initial condition; it enforces integrity).
How do you know those are the right properties? Well, that's the challenge of software validation, isn't it :) Ultimately, formal methods don't make that problem much harder; you just need to figure out a way to formally phrase the properties you want. A lot of software has very fuzzy validation criteria and isn't well suited to formal verification for that reason.
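To make "prove properties about your specification" concrete, here's a toy sketch in Lean 4. This is my own illustration, not seL4's actual formalization (which is done in Isabelle/HOL and is vastly larger): a two-partition state, a "spec" whose only allowed step writes partition `a`, and a proof that partition `b` is isolated under that spec.

```lean
-- A toy two-partition state (nothing like seL4's real abstract spec).
structure State where
  a : Nat
  b : Nat

-- The "abstract spec" as a transition relation: the only step the
-- spec permits is overwriting partition `a`.
inductive Step : State → State → Prop
  | writeA (s : State) (n : Nat) : Step s { s with a := n }

-- A security property of the spec itself: partition `b` is isolated,
-- i.e. no permitted step can ever change it.
theorem b_isolated (s t : State) (h : Step s t) : t.b = s.b := by
  cases h
  rfl
```

The point is that the theorem is about the spec, with no implementation in sight: it rules out a whole class of wrong specs before you ever refine down to code.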
In addition to other good answers, there is also the fact that verification doesn't have to be "deep": "shallow" verification of only some security properties can also be very useful. For example, it may be hard to verify the "correctness" of a web server, since the specification of what a web server should do is quite complex.
But (for example, for a web server written in Ada or C) you can also do without a specification and just apply verification tools to only check things like "all memory accesses are valid", proving that you have no buffer overflows that would cause denial of service or security holes.
Of course someone still has to write the verification tool correctly to ensure that it actually gives the safety guarantees you want, but at least it's not you who has to worry about that ;-) And specifying the absence of buffer overflows is comparatively simple.
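As a concrete sketch of that kind of shallow specification, here's roughly what it looks like with ACSL, the annotation language used by tools like Frama-C for C code. The function and its contract are made up for illustration; the annotations live in comments, so the compiler ignores them and a verification tool (e.g. Frama-C's WP plugin) discharges them:

```c
#include <stddef.h>

/* The ACSL contract below says exactly what "no buffer overflow"
 * requires here: buf must be valid for len bytes, and the function
 * writes nothing outside that range. No functional spec needed. */
/*@ requires \valid(buf + (0 .. len - 1));
    assigns  buf[0 .. len - 1];
*/
void zero_fill(unsigned char *buf, size_t len) {
    /*@ loop invariant 0 <= i <= len;
        loop assigns   i, buf[0 .. len - 1];
        loop variant   len - i;
    */
    for (size_t i = 0; i < len; i++)
        buf[i] = 0;  /* in bounds: i < len and buf is valid up to len - 1 */
}
```

Note how much smaller this "spec" is than a functional one: nothing says the buffer ends up zeroed, only that every access is legal.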
cmrx64 addressed the validation side well. I'll add that there are formal specification tools for requirements (the What), such as Software Cost Reduction and Abstract State Machines, that have helped there. The trick with those is that they help you be precise about the environment, inputs, and outputs in a way human reviewers can understand. Reviewers can then think through the spec and discuss it, sharing their thoughts in both English and the formalism, to spot problems or even improvement opportunities. The main gotcha is ensuring the English and formal specs don't diverge.
As far as verified stacks go, the throwaway linked the basic concept, but here are two works (one old, one new) on verified provers so you can see it in action:
The former was early work by Appel et al. on getting the TCB of provers down to a few thousand lines of code. The latter uses composable logics and a small implementation that itself uses the verified tooling of Myreen et al., whose research covers a lot of angles:
Most of their work is done in Isabelle/HOL, which HOL Light is aiming to verify in a lightweight way. The idea is that you only have to trust the code of the kernel (hundreds to thousands of lines) that checks proof terms, plus the specs that are fed into it. They're doing everything they can in the proof assistants, so the minimal checkers will be able to confirm the operations.
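To illustrate what "a small kernel that checks proof terms" means, here's a toy LCF-style checker in C. All the names are invented, and real kernels (HOL Light's is a few hundred lines of OCaml implementing higher-order logic) are far richer; the structural idea is just that theorems are a datatype only the kernel's inference rules can produce, so trusting those rules plus your axioms is enough to trust everything derived from them:

```c
#include <stdlib.h>

/* Toy formulas: single-letter atoms and implications A -> B. */
typedef struct Formula {
    char atom;                  /* meaningful only when lhs == NULL  */
    struct Formula *lhs, *rhs;  /* both non-NULL for an implication  */
} Formula;

static Formula *mk_atom(char c) {
    Formula *f = malloc(sizeof *f);
    f->atom = c; f->lhs = f->rhs = NULL;
    return f;
}

static Formula *mk_imp(Formula *a, Formula *b) {
    Formula *f = malloc(sizeof *f);
    f->atom = 0; f->lhs = a; f->rhs = b;
    return f;
}

static int feq(const Formula *a, const Formula *b) {
    if (a->lhs == NULL || b->lhs == NULL)
        return a->lhs == NULL && b->lhs == NULL && a->atom == b->atom;
    return feq(a->lhs, b->lhs) && feq(a->rhs, b->rhs);
}

/* The "kernel": a Thm should only ever come from the rules below,
 * so the TCB is these rules plus whatever axioms you feed in. */
typedef struct { const Formula *concl; } Thm;

/* Admit a formula as an axiom -- this is where the spec comes in. */
static Thm axiom(const Formula *f) {
    Thm t = { f };
    return t;
}

/* Modus ponens: from |- A -> B and |- A, conclude |- B. */
static int mp(Thm ab, Thm a, Thm *out) {
    if (ab.concl->lhs == NULL || !feq(ab.concl->lhs, a.concl))
        return 0;               /* ill-formed proof step: reject */
    out->concl = ab.concl->rhs;
    return 1;
}
```

Everything else (tactics, automation, decision procedures) can be arbitrarily buggy: it merely proposes proof steps, and a bogus step just gets rejected by `mp` rather than producing a bad `Thm`.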