As I said, today when it really matters we can use stdint. But I feel it would have been too burdensome to mandate in the standard in the early days of C.
Like fucking what? If you do any timestamps, you just use the fastest unsigned type you have on the target platform, and you do NOT care. 16-bit? Wrapping at about 1 minute? That's an eternity even on a 2 MHz 6502... You just count and subtract to get the difference. Usually you are in the <1000 ms range, so wrapping is not a problem.
If you target both 32-bit and 16-bit, you either think about it and use long (which is more costly on 16-bit), or you just count seconds, or ticks, or whatever you need.
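A minimal sketch of the count-and-subtract idea, assuming a hypothetical free-running 16-bit millisecond counter: unsigned subtraction wraps modulo 65536, so the elapsed time comes out right across a wrap as long as the real interval is under roughly 65.5 seconds.

    #include <stdint.h>
    #include <stdio.h>

    /* Elapsed time via unsigned subtraction: modular arithmetic makes the
     * result correct even when the counter wrapped between the two reads,
     * as long as the real interval is less than 65536 ms. */
    static uint16_t elapsed_ms(uint16_t start, uint16_t now)
    {
        return (uint16_t)(now - start);
    }

    int main(void)
    {
        uint16_t start = 65500;  /* read just before the counter wraps */
        uint16_t later = 100;    /* read after the counter has wrapped */
        printf("%u\n", elapsed_ms(start, later));  /* prints 136 */
        return 0;
    }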
I'm not writing the app. The app was written according to your preferred design and I'm compiling it for Arduino. You say to just use int because it always has enough bits, then you say to sometimes use long because int might not have enough bits.
I don't know the intended use. If you need a delay or a difference, 16 bits is more than enough. If you're writing a generic clock with ms accuracy, it will not be enough.
You either split it or use larger storage. It's not rocket science...
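One way to read "split it", as a hedged sketch: instead of one wide millisecond counter, keep a seconds counter plus a small millisecond remainder. The struct and the tick() function here are hypothetical, assumed to be driven once per millisecond.

    #include <stdint.h>

    /* Split clock: seconds plus a 0..999 millisecond remainder, so the
     * fractional part never needs more than 16 bits of range. */
    struct split_clock {
        uint32_t seconds;  /* wraps after roughly 136 years */
        uint16_t millis;   /* always in 0..999 */
    };

    /* Assumed to be called once per millisecond (timer interrupt or loop). */
    static void tick(struct split_clock *c)
    {
        if (++c->millis >= 1000) {
            c->millis = 0;
            c->seconds++;
        }
    }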
When you choose unsigned int for that type, you compare against UINT_MAX and return EOVERFLOW, ENOMEM, EOUT_OF_RANGE or whatever you like when someone sets a timer greater than that. Or you choose another type, e.g. unsigned long, which is guaranteed to hold values up to at least 4294967295. I happen to have been programming for the Arduino platform recently, and milliseconds in the Arduino API are typed unsigned long.
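For illustration only, a sketch of that check, with a made-up set_timer_ms() whose internal storage was chosen as unsigned int. On AVR-based Arduinos unsigned int is 16-bit while unsigned long (the millis() type) is 32-bit, so the comparison is meaningful there; on targets where the two types are the same width, the check simply never fires.

    #include <errno.h>
    #include <limits.h>

    static unsigned int timer_target_ms;  /* storage type chosen as unsigned int */

    /* Reject requests that do not fit in the chosen storage type instead of
     * silently truncating them. EOVERFLOW is one reasonable error to return. */
    int set_timer_ms(unsigned long requested_ms)
    {
        if (requested_ms > UINT_MAX)
            return EOVERFLOW;
        timer_target_ms = (unsigned int)requested_ms;
        return 0;
    }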
If your decision is that you really want to store all 32-bit values, then you use uint_least32_t or uint_fast32_t, depending on whether you are also resource constrained.
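A short sketch of the difference, assuming you just need a 32-bit-capable millisecond variable:

    #include <stdint.h>

    /* Both types are guaranteed to hold at least the full 32-bit range;
     * the choice is a hint about which trade-off you want. */
    uint_least32_t stored_ms;   /* smallest type with >= 32 bits: favors RAM */
    uint_fast32_t  working_ms;  /* "fastest" type with >= 32 bits: may be
                                   wider than 32 bits on some targets */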