I have been dabbling in basic probability theory and, being (of course) of the school of abstract nonsense, I have tried to understand things in its language. I apologize if this question is therefore somehow obvious.
As I understand it, if $X$ is a probability space with measure $\mu$ and $f \colon X \to \mathbb{R}$ is a random variable (measurable function), its "probability distribution" is actually the pushforward measure $f_*(\mu)$, which is a function multiple of Lebesgue measure (i.e. has a density) only under some kind of regularity hypothesis that I'm not asking about now. If $g$ is another such random variable, the joint distribution is obtained by forming the map $(f, g) \colon X \to \mathbb{R}^2$ and pushing forward again, getting $(f,g)_*(\mu)$. Independence of these random variables is the statement that $(f,g)_*(\mu)$ is the product measure $f_*(\mu) \otimes g_*(\mu)$; equivalently, by Fubini's theorem, for any bounded measurable $h(x,y)$ on $\mathbb{R}^2$ we have $$\int h(x,y) \, d(f,g)_*(\mu) = \iint h(x,y) \, df_*(\mu)(x) \, dg_*(\mu)(y).$$
Okay. It therefore appears that when $X = [0,1]$ with Lebesgue measure, the following dream is possible: find a pair of functions $f, g$ as above, say with values also in $[0,1]$, that are independent and each uniformly distributed, in the sense that both of their distributions are Lebesgue measure on $[0,1]$. Then $(f,g)_*(\mu)$ is Lebesgue measure on $[0,1]^2$. This seems to me to be rather different from the usual construction.
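To convince myself the dream is realizable at all, here is a quick numerical sketch of one construction (bit interleaving, my own choice of example; `split_bits` is a hypothetical helper, not a library function): let $f(t)$ read the even-indexed binary digits of $t$ and $g(t)$ the odd-indexed ones. Since the binary digits of a uniform $t$ are i.i.d. fair coin flips, $f$ and $g$ should come out independent and uniform.

```python
import random

def split_bits(t, nbits=40):
    """Hypothetical helper: return (f(t), g(t)), where f is built from the
    even-position binary digits of t and g from the odd-position ones."""
    f = g = 0.0
    wf = wg = 0.5  # place value of the next digit contributed to f resp. g
    for i in range(nbits):
        t *= 2
        bit = int(t)  # extract the next binary digit of t
        t -= bit
        if i % 2 == 0:
            f += bit * wf
            wf /= 2
        else:
            g += bit * wg
            wg /= 2
    return f, g

random.seed(0)
samples = [split_bits(random.random()) for _ in range(100_000)]
fs = [p[0] for p in samples]
gs = [p[1] for p in samples]

# Each marginal should be approximately uniform on [0, 1] ...
print(sum(fs) / len(fs), sum(gs) / len(gs))  # both near 0.5
# ... and jointly uniform: the lower-left quadrant carries about 1/4 of the mass.
q = sum(1 for f, g in samples if f < 0.5 and g < 0.5) / len(samples)
print(q)  # near 0.25
```

Of course this only checks a couple of moments, but it matches the picture: $(f,g)$ pushes Lebesgue measure on $[0,1]$ forward to (what looks like) Lebesgue measure on the square.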
I can imagine what the pair $(f,g)$ must look like: it traces a curve in $[0,1]^2$ that fills the square in a measure-theoretic sense, since the preimage of any small box must have positive measure, and in particular be nonempty; i.e. the image of the curve is dense.
Question: Are the component functions of the standard space-filling curves (say, the Hilbert curve) independent and uniformly distributed? Can it be shown directly that they are independent, i.e. without constructing two-dimensional Lebesgue measure and invoking Fubini's theorem explicitly? Is this a valid way of getting Lebesgue measure on $[0,1]^2$?
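For what it's worth, the dyadic version of the question can be checked by hand. Below is a sketch using the standard index-to-coordinate conversion for the Hilbert curve (the `d2xy` routine, as written up e.g. in the Wikipedia article on the Hilbert curve): at order $k$ the curve visits each cell of the $2^k \times 2^k$ grid exactly once, which is precisely the statement that $(f,g)$ sends dyadic intervals of length $4^{-k}$ to dyadic squares of area $4^{-k}$, i.e. is measure-preserving at dyadic resolution.

```python
from collections import Counter

def d2xy(n, d):
    """Standard Hilbert-curve conversion: map curve index d in [0, n*n)
    to grid coordinates (x, y), where n is a power of 2."""
    x = y = 0
    s = 1
    while s < n:
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:  # rotate/reflect the sub-quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

n = 64  # order-6 approximation
pts = [d2xy(n, d) for d in range(n * n)]

# The curve is a bijection onto the grid: each dyadic square of side 1/n
# receives exactly one curve step, hence measure 1/n^2 ...
assert len(set(pts)) == n * n
# ... and each marginal is uniform: every column is hit exactly n times.
cols = Counter(x for x, _ in pts)
assert all(cols[x] == n for x in range(n))
print("order-6 Hilbert curve is grid-bijective with uniform marginals")
```

This is only the discrete statement, of course; the question is whether it passes to the limit and whether there is a direct argument for independence that avoids constructing two-dimensional Lebesgue measure first.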