A modern processor has a lot in it, but you can understand it as a finite automaton. That realisation alone made computer architecture and processor design far less intimidating to me. It also shows that, in practice, if you structure and manipulate your states carefully, you can get very far with finite automata.
I don't understand this one. Surely a modern processor is like a Turing machine and not a finite state automata. For example, a DFA/NFA can't express simple algorithms like matching parentheses, while obviously a modern processor can.
The processor plus the RAM is a Turing machine. If you decide in advance what inputs go to the input pins, regardless of the outputs, and you just record what comes out of the output pins, the processor is pretty much a DFA.
When you add the RAM, you provide a feedback loop: inputs depend on previous outputs (in plainspeak, what you write to RAM you can read back later). No feedback loop, no Turing machine.
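A minimal sketch of that separation (a toy machine, not any real ISA): the "processor" is a pure transition function from (state, input) to (state, output), and the "RAM" is just a dict that feeds earlier outputs back in as later inputs.

```python
def cpu_step(state, data_in):
    """A toy 'processor': deterministic, finitely many states, no memory
    of its own. Its trivial 'program' adds data_in to an accumulator and
    alternates between asking to store and asking to load."""
    pc, acc = state
    if pc % 2 == 0:                       # even cycle: store the new accumulator
        acc = acc + data_in
        return (pc + 1, acc), ("write", pc, acc)
    else:                                 # odd cycle: request a load
        return (pc + 1, acc), ("read", pc - 1, None)

def run_with_ram(steps):
    """Close the feedback loop: outputs written to RAM become later inputs.
    Without this loop, cpu_step on its own is just a DFA."""
    ram = {}
    state, data_in = (0, 0), 1
    for _ in range(steps):
        state, (op, addr, val) = cpu_step(state, data_in)
        if op == "write":
            ram[addr] = val
            data_in = 0
        else:                             # "read": RAM supplies the next input
            data_in = ram.get(addr, 0)
    return state, ram
```

The point of the split is that `cpu_step` never remembers anything beyond its finite state tuple; all unbounded memory lives in `ram`, outside the "chip".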
The processor chip itself is designed as a DFA (as is very common in digital logic design, though EEs prefer the term finite state machine, or FSM). These slides do a good job of explaining this perspective on digital design. [1]
It becomes much more powerful when given the ability to read from and write to arbitrary storage, the same way theoretical DFAs become Turing machines when given the same power.
I am glossing over the complications that on-chip caches and the memory hierarchy introduce, but I feel the core idea of a processor chip as a DFA is still worthwhile.
It's basically because you don't have infinite storage -- I could, given enough time, feed your PC a string that it wouldn't be able to tell was unbalanced. Of course, to do that I would have to overflow your available storage, which would take a while.
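You can see the overflow argument in miniature with a hypothetical paren matcher whose depth counter is capped at k bits, the way any fixed-width hardware register would be. Such a checker is a finite automaton with at most 2**k counter values, and nesting deeper than that fools it:

```python
def balanced_finite(s, bits):
    """Parenthesis matcher with a k-bit depth counter that wraps on
    overflow, as fixed-size storage would. With `bits` bits it has only
    2**bits counter states, so it is a finite automaton."""
    depth, limit = 0, 2 ** bits
    for c in s:
        if c == "(":
            depth = (depth + 1) % limit   # overflow: nesting information is lost
        elif c == ")":
            if depth == 0:
                return False
            depth -= 1
    return depth == 0

# With 3 bits the counter wraps at depth 8, so 8 unmatched opens
# look "balanced"; with 4 bits there is enough storage to catch it.
s = "(" * 8
print(balanced_finite(s, bits=3))   # wrongly reports True
print(balanced_finite(s, bits=4))   # correctly reports False
```

A real PC is the same machine with an astronomically larger `bits`, which is why the limitation never matters in practice.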
For all practical purposes, your PC adequately models a Turing machine. In point of literal fact, it doesn't because it only has finite storage. In practical usage, it is sometimes helpful to consider various states your code may enter and how they interact. For understanding how the system actually works, the author appears to have found the state-switching model to be a useful one, and indeed I understand that it's common to model the internal state of various devices using finite automata.
A PC can still read and write arbitrary memory locations, which means it can know a lot about its past states, so the memory limit makes it a deterministic linear-bounded automaton.
But an LBA's tape is limited to the size of the input string, while a PC has a fixed but insanely large number of states (2^{number of bits of storage}) and can't, even in principle, accept arbitrarily long strings.
In the somewhat twisted view of the machine I'm putting forward, it's not accessing 'arbitrary memory locations' as memory locations; rather, the content of that memory location is part of the input state.