But in almost all cases that doesn't make sense, does it? Typically the data in different dimensions have different "units", so there's no intrinsic meaning to the scale in the first place. How could scaling by a single scalar be "more natural"?
If different components of the dataset have different units, I would argue that it is a prerequisite of clustering to first specify the relative importance of each unit (thereby putting all units on the same scale). Otherwise, there's no way the clustering algorithm could possibly know what to do in certain cases (such as the ::: example).
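To make that concrete, here's a minimal sketch of one common way to "specify the relative importance of each unit": z-scoring each column. The features and numbers here are made up for illustration; z-scoring is just one choice, it declares that one standard deviation of each feature counts equally.

```python
import numpy as np

# Hypothetical dataset with incompatible units:
# column 0 is height in metres, column 1 is weight in grams.
X = np.array([[1.7, 70000.0],
              [1.8, 65000.0],
              [1.6, 80000.0]])

# Z-scoring each column makes the data dimensionless and fixes
# a relative scale between the features.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
```

After this step, distances in `Z` are well-defined and a clustering algorithm has something meaningful to work with.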
It's true that there is no intrinsic meaning to the scale, but you must specify at least a relative scale -- how you want to compare (or weigh) different units -- before you can meaningfully cluster the data. Clustering can only work on dimensionless data.
This is the point, I think: there's no inherent meaning to the scaling factor(s) as far as overall structure is concerned (they're dimensionless, so the units issue doesn't arise), so the outcome of a clustering algorithm should not depend on them.
Ah, I see. As I understand it, a general linear map like that isn't what the linked paper means by "scale-invariance", so it wouldn't count as a violation for a dataset and its PCA to be given different clusters by your clustering algorithm. It's only the dataset and its scaled-up or scaled-down counterparts (i.e. the metric multiplied by a fixed non-zero constant) that are required to get the same clusters for scale-invariance to hold.
In fact the paper doesn't assume that your dataset is contained in a vector space at all. All you have to give a clustering algorithm (as they define it) is a set and a metric function on it.
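To illustrate that setting, here's a small sketch using scipy's single-linkage clustering (my choice of algorithm, not something from the paper): the input is just a pairwise-distance matrix, and multiplying that metric by a fixed positive constant leaves the clustering unchanged.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# The paper's setting only needs a set of points and a metric on them,
# so we work with pairwise distances directly.
rng = np.random.default_rng(0)
points = rng.normal(size=(20, 3))
d = pdist(points)  # condensed pairwise-distance matrix

def cluster(dist, k=3):
    # Single-linkage clustering, cut into k clusters.
    return fcluster(linkage(dist, method="single"), k, criterion="maxclust")

# Scale-invariance in the paper's sense: multiplying the metric by a
# fixed positive constant must not change the resulting clusters.
labels = cluster(d)
labels_scaled = cluster(5.0 * d)
print((labels == labels_scaled).all())  # single linkage passes this test
```

Scaling all distances by 5 multiplies every merge height by 5 but leaves the merge order, and hence the k-cluster partition, untouched, which is exactly the invariance being discussed.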