How will you know if your integer type is adequate for the problem at hand, if you don't know its size?
Choosing the right type is a function of signedness, upper/lower bound (number of things), and sometimes alignment. These are fundamental properties of the problem domain. Guesstimating is simply not doing the required work.
C specifies minimum sizes. That's all you need 99% of the time. I'm always annoyed by the people who assume int is 32 bits. You can't assume that in portable code. Use long, or ensure the code works with 16-bit ints. That is how the type system was meant to be used: int was supposed to reflect the natural word size of the machine so you could work with the optimum integral type across mismatched platforms. 64-bit platforms have mostly abandoned that idea and 8-bit never got to participate, but the principle is embedded in the language.
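To make that concrete, a minimal sketch (the guarantees quoted in the comments come from the standard's `<limits.h>` minimums; the specific count is just an illustration):

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* C only guarantees INT_MAX >= 32767 and LONG_MAX >= 2147483647.
     * A count that can exceed 32767 therefore needs long (or wider)
     * in portable code, even though int happens to be 32 bits on
     * most desktop platforms. */
    long file_count = 100000L;  /* would overflow a 16-bit int */

    printf("INT_MAX on this platform:  %d\n", INT_MAX);
    printf("LONG_MAX on this platform: %ld\n", LONG_MAX);
    printf("file_count: %ld\n", file_count);
    return 0;
}
```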
> The programmer should rather prescribe intent and shouldn't constantly think about what size this should exactly have.
You still have to constantly think about size! Except now you have to think about _minimum_ size, and possibly use a data type that's too big, because the correctly sized one for your platform has a guaranteed minimum that's too small for what you want to do.
It does agree with what I intended to say. The values a type needs to be able to represent are very much part of the intent of a variable. What the programmer doesn't need to specify is with what bit pattern, and with which exact bits, these values are going to be represented. There are use cases where you do in fact want to do that, but then that implies that you actually care about the wrapping semantics and are going to manipulate bit patterns.
The idea is mostly that we shouldn't worry. The user of the lib on an Arduino will feed it Arduino-sized problems, and the amd64 user will likewise feed it larger problems. Again, think of the transition from 32 to 64 bit: most ranges are input/user dependent, and it would have been needlessly messy, even with automatic conversion help, to have to rewrite, say, every i32 to i64, or decide which ones to convert.
As I said, today, when it really matters, we can use stdint. But I feel it would have been too burdensome to mandate in the standard in the early days of C.
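For example, something like this is what "when it really matters" looks like with stdint (a sketch; the struct and field names are made up, and note the exact-width types are optional in the standard):

```c
#include <stdint.h>

/* When the exact bit pattern matters -- say, a wire format or a
 * register map -- exact-width types state the intent directly.
 * These types exist only on platforms that have types of exactly
 * this width with no padding bits, and intN_t is guaranteed to be
 * two's complement where it exists. */
typedef struct {
    uint32_t sequence;   /* exactly 32 bits, wraps mod 2^32 */
    int16_t  offset_ms;  /* exactly 16 bits, two's complement */
} packet_header;
```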
Like fucking what? If you do any timestamping, you just use whatever fastest unsigned type you have on the target platform, and you do NOT care. 16-bit? Wrapping after ~1 minute? That's an eternity even on a 2 MHz 6502... You just count and subtract to get the difference. Usually you are in the <1000 ms range, so wrapping is not a problem.
If you target both 32-bit and 16-bit, you either think about it and use long (more costly on 16-bit), or you just count seconds. Or ticks. Or whatever you need.
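A sketch of the count-and-subtract idea (millis_now() here is a hypothetical stand-in for whatever tick source the platform provides, e.g. millis() on Arduino):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical tick source returning milliseconds. */
extern uint16_t millis_now(void);

/* Unsigned subtraction is defined to wrap modulo 2^16, so the
 * elapsed time comes out right even if the counter wrapped once
 * between the two samples. 16 bits of milliseconds wrap after
 * ~65.5 seconds, which is plenty for sub-second intervals. */
bool elapsed_at_least(uint16_t since, uint16_t interval_ms)
{
    return (uint16_t)(millis_now() - since) >= interval_ms;
}
```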
I'm not writing the app. The app was written according to your preferred design and I'm compiling it for Arduino. You say to just use int because it always has enough bits, then you say to sometimes use long because int might not have enough bits.
I don't know the intended use. If you need a delay or a difference, 16 bits is more than enough. If you're writing a generic clock with ms accuracy, it will not be enough.
You either split it or use larger storage. It's not rocket science...
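A sketch of the "split it" option, assuming a 1 ms tick interrupt (the names are made up):

```c
#include <stdint.h>

/* One way to split it: keep milliseconds within the current second
 * in 16 bits and count whole seconds separately, instead of one
 * wide millisecond counter. tick() would be called from a 1 ms
 * timer interrupt. */
static uint16_t ms_in_second;  /* 0..999, fits easily in 16 bits */
static uint32_t seconds;       /* wraps after ~136 years */

void tick(void)
{
    if (++ms_in_second >= 1000u) {
        ms_in_second = 0;
        ++seconds;
    }
}
```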
When you choose unsigned int for that type, you compare against UINT_MAX and return EOVERFLOW, ENOMEM, EOUT_OF_RANGE, or whatever you like when someone sets a timer greater than that. Or you choose another type, e.g. unsigned long, which is guaranteed to hold values up to at least 4294967295. I happen to be programming for the Arduino platform recently, and milliseconds in the Arduino API are typed unsigned long.
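Something like this, for instance (set_timer and the internal counter are hypothetical; EOVERFLOW is POSIX rather than ISO C):

```c
#include <errno.h>
#include <limits.h>

/* Hypothetical timer API: the internal counter is unsigned int,
 * so reject requests that the chosen type cannot represent. */
static unsigned int timer_ms;

int set_timer(unsigned long ms)
{
    if (ms > UINT_MAX) {
        errno = EOVERFLOW;
        return -1;
    }
    timer_ms = (unsigned int)ms;
    return 0;
}
```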
If your decision is that you really want to store all 32-bit values, then you use uint_least32_t or uint_fast32_t, depending on whether you are also resource constrained.
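A sketch of that choice (the table and function are made up for illustration):

```c
#include <stdint.h>

/* uint_least32_t: the smallest type with at least 32 bits --
 * the compact option when RAM is tight.
 * uint_fast32_t: the fastest type with at least 32 bits --
 * may be wider than 32 if that is cheaper for the CPU. */
typedef struct {
    uint_least32_t ids[1024];  /* bulk storage: favor size */
} id_table;

uint_fast32_t sum_ids(const id_table *t)
{
    uint_fast32_t sum = 0;     /* working variable: favor speed */
    for (int i = 0; i < 1024; i++)
        sum += t->ids[i];
    return sum;
}
```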