In this paper, we establish novel data-dependent upper bounds on the generalization error through the lens of a ``variable-size compressibility'' framework that we introduce here. In this framework, the generalization error of an algorithm is linked to a variable-size `compression rate' of its input data. This yields bounds that depend on the particular input training data sample at hand, rather than on its unknown distribution. The generalization bounds that we establish are tail bounds, tail bounds on the expectation, and in-expectation bounds. Moreover, we show that our framework also allows one to derive general bounds on any function of the input data and output hypothesis random variables. In particular, these general bounds are shown to subsume, and possibly improve upon, several existing PAC-Bayes and data-dependent intrinsic dimension-based bounds, which are recovered as special cases, thus unveiling the unifying character of our approach.