This post will be essentially about functions of bounded variation of one variable. The main source is the book “Functions of Bounded Variation and Free Discontinuity Problems” by Ambrosio, Fusco and Pallara. Before we give the definition of a function of bounded variation, let us recall what exactly it means for a function to belong to W^{1,1}(0,1). Recall that any function f\in L^{1}(0,1) can be seen as a distribution T_{f}, i.e. as a bounded linear functional T_{f}:C_{c}^{\infty}(0,1)\to\mathbb{R} with

T_{f}(\phi)=\int_{0}^{1}f(x)\phi(x)dx,\quad \forall \phi\in C_{c}^{\infty}(0,1).

In that case we say that the distribution T_{f} is representable by the function f. Given any distribution T we can define its distributional derivative  DT  to be the distribution defined as

DT(\phi)=-T(\phi'),\quad \forall \phi\in C_{c}^{\infty}(0,1).

In the special case where the distribution T can be represented by a function f in the way we showed above, the distributional derivative DT_{f} will be

DT_{f}(\phi)=-\int_{0}^{1}f(x)\phi'(x)dx,\quad \forall \phi\in C_{c}^{\infty}(0,1).
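
As a quick sanity check (a standard observation, not something specific to the book): if f happens to be a C^{1} function, then integration by parts, with the boundary terms vanishing because \phi has compact support in (0,1), gives

DT_{f}(\phi)=-\int_{0}^{1}f(x)\phi'(x)dx=\int_{0}^{1}f'(x)\phi(x)dx=T_{f'}(\phi),\quad \forall \phi\in C_{c}^{\infty}(0,1),

so for smooth functions the distributional derivative is represented by the classical derivative.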

For notational convenience, when the distribution T_{f} is representable by the function f we write Df:=DT_{f}. Suppose now that not only is the distribution representable by a function f, but its distributional derivative is also representable by an L^{1}(0,1) function, say g. Then the relationship between f and g will be

\int_{0}^{1}g(x)\phi(x)dx=-\int_{0}^{1}f(x)\phi'(x)dx,\quad \forall \phi\in C_{c}^{\infty}(0,1).

In that case we say that f\in W^{1,1}(0,1) and we denote Df=g. However, there is another, more general, way in which a distribution can be represented. Suppose that \mu is a (signed) Radon measure on (0,1), i.e. a real-valued set function defined on the Borel sets which is countably additive. Note that we require the measure \mu to take only real values and never the value \infty; in particular, a (signed) Radon measure is always a finite measure for us. Let us now define the distribution T_{\mu} as follows:

T_{\mu}(\phi)=\int_{0}^{1}\phi(x)d\mu(x),\quad \forall \phi\in C_{c}^{\infty}(0,1).

One can check that this is indeed a distribution. We say that \mu represents the distribution T_{\mu}. Suppose now, furthermore, that the Radon measure \mu is absolutely continuous with respect to the Lebesgue measure on (0,1). By the Radon–Nikodym theorem there exists an L^{1}(0,1) function g such that

\mu(B)=\int_{0}^{1}\mathcal{X}_{B}(x)g(x)dx,\quad \forall B\in \mathcal{B}(0,1)

and

\int_{0}^{1}\phi(x)d\mu(x)=\int_{0}^{1}g(x)\phi(x)dx,\quad \forall \phi \in L_{\mu}^{1}(0,1).

This means that, instead of saying that a distribution T_{f} is representable by a function f\in L^{1}(0,1), one could say that T_{f} can be represented by a Radon measure which is absolutely continuous with respect to the Lebesgue measure, with corresponding density f. Thus:

The space \mathbf{W^{1,1}(0,1)} consists exactly of all the functions in \mathbf{L^{1}(0,1)} whose distributional derivative can be represented by a Radon measure which is absolutely continuous with respect to Lebesgue measure.
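
As an illustration (a standard example, not taken from the book), consider f(x)=|x-1/2|. Splitting the integral at 1/2 and integrating by parts on each piece, one checks that

-\int_{0}^{1}|x-1/2|\,\phi'(x)dx=\int_{0}^{1}g(x)\phi(x)dx,\quad \forall \phi\in C_{c}^{\infty}(0,1),

where g=-1 on (0,1/2) and g=1 on (1/2,1). Since g\in L^{1}(0,1), the distributional derivative of f is represented by the measure g\,dx, which is absolutely continuous with respect to the Lebesgue measure, and therefore f\in W^{1,1}(0,1).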

However, there exist functions whose distributional derivative cannot be represented by such a measure, but can be represented by a general Radon measure. This leads to the following definition:

Definition:(BV(0,1))

Let f\in L^{1}(0,1). We say that f is of bounded variation if its distributional derivative can be represented by a Radon measure. We denote this measure by Df. This means that

\int_{0}^{1}\phi(x)dDf(x)=-\int_{0}^{1}f(x)\phi'(x)dx,\quad \forall \phi\in C_{c}^{\infty}(0,1).

We denote this space by BV(0,1). We also define the total variation of f to be the total variation |Df|(0,1) of the measure Df, i.e.

|Df|(0,1)=\sup\left \{\sum_{i=1}^{\infty}|Df(B_{i})|\right \}

where the supremum is taken over all partitions of (0,1) into pairwise disjoint Borel sets (B_{i})_{i=1}^{\infty}.

Let us note here that a signed Radon measure always has finite total variation.

REMARK: By a smoothing argument one can show that the above relation is true for any \phi\in C_{c}^{1}(0,1).
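
To see that this definition is strictly more general than membership in W^{1,1}(0,1) (again a standard example, not taken from the book), consider the step function f=\mathcal{X}_{(1/2,1)}. For every \phi\in C_{c}^{\infty}(0,1) we have

-\int_{0}^{1}f(x)\phi'(x)dx=-\int_{1/2}^{1}\phi'(x)dx=\phi(1/2)=\int_{0}^{1}\phi(x)d\delta_{1/2}(x),

so Df=\delta_{1/2}, the Dirac measure at 1/2, and |Df|(0,1)=1. Hence f\in BV(0,1), but f\notin W^{1,1}(0,1), since \delta_{1/2} is not absolutely continuous with respect to the Lebesgue measure.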

But wait a minute! What did we learn at school? Wasn’t a function of bounded variation a function that does not oscillate too much? Didn’t we take partitions of (0,1), measure the jumps of f over these partitions and then take the supremum of the total size of the jumps? Isn’t the “correct” definition something like this:

Definition: (Pointwise variation)

Let f:(0,1)\to\mathbb{R}. We define the pointwise variation of f in (0,1), denoted pV(f,(0,1)), to be the following supremum:

\sup\left \{\sum_{i=1}^{n-1}|f(t_{i+1})-f(t_{i})|:\,n\ge 2,\;0<t_{1}<\cdots<t_{n}<1 \right \}
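
For instance (an easy computation, not from the book), for the step function f=\mathcal{X}_{(1/2,1)} considered above, any sum appearing in the supremum is at most 1, and any partition with t_{1}<1/2<t_{2} gives exactly 1, so pV(f,(0,1))=1, matching the total variation |Df|(0,1)=1 computed earlier. On the other hand, a bounded function can oscillate too much: f(t)=\sin(1/t) has pV(f,(0,1))=\infty, since it performs infinitely many oscillations of size 2 near 0.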

Let us make a few remarks about the above definition. It is very easy to check that every function f:(0,1)\to\mathbb{R} with finite pointwise variation is bounded. Also, any bounded monotone function f defined on (0,1) has pointwise variation equal to |f(1_{-})-f(0_{+})| (the left limit at 1 and the right limit at 0, respectively). We can also check that any function f with pV(f,(0,1))<\infty can be written as a difference of two bounded increasing functions f_{1} and f_{2} such that pV(f,(0,1))=pV(f_{1},(0,1))+pV(f_{2},(0,1)). Indeed:

Set

g(t)=\sup\left \{\sum_{i=1}^{n-1}|f(t_{i+1})-f(t_{i})|:\,n\ge 2,\;0<t_{1}<\cdots<t_{n}\le t \right \}

and

f_{1}(t)=\frac{g(t)+f(t)}{2},\quad t\in(0,1).

f_{2}(t)=\frac{g(t)-f(t)}{2},\quad t\in(0,1).

In order to see that f_{1} is increasing, note that if s\le t we have

g(t)=\sup\left \{\sum_{i=1}^{n-1}|f(t_{i+1})-f(t_{i})|:\,n\ge 2,\;0<t_{1}<\cdots<t_{n}\le t \right \}

=\sup\left \{\sum_{i=1}^{n-1}|f(t_{i+1})-f(t_{i})|:\,n\ge 2,\;0<t_{1}<\cdots<s<\cdots<t_{n}\le t \right \}

where the second supremum is taken over partitions that contain s among their points (the two suprema coincide because inserting the point s into a partition does not decrease the corresponding sum, by the triangle inequality). This means that g(t)\ge g(s)+|f(t)-f(s)|\ge g(s)-(f(t)-f(s)), and hence f_{1}(t)\ge f_{1}(s). Similarly f_{2} is increasing, and of course f=f_{1}-f_{2}. It remains to show that pV(f,(0,1))=pV(f_{1},(0,1))+pV(f_{2},(0,1)). The key point here is to observe that

g(0_{+})=0 and g(1_{-})=pV(f,(0,1)).

\mathbf{(i)} \mathbf{g(0_{+})=0}

Suppose that this is not true. Then there exist a constant C>0 and a sequence (t_{m})_{m\in \mathbb{N}} tending to 0 such that g(t_{m})> C for every m\in \mathbb{N}. This means that for every m\in \mathbb{N} there exist points 0<t_{1}^{m}<\cdots<t_{n_{m}}^{m}\le t_{m} such that

\sum_{i=1}^{n_{m}-1}|f(t_{i+1}^{m})-f(t_{i}^{m})|>C\quad\quad (1)

Without loss of generality, passing to a subsequence of (t_{m})_{m\in \mathbb{N}} if necessary, we can assume that t_{n_{m+1}}^{m+1}<t_{1}^{m} for every m. Now we have the following sequence decreasing to 0:

t_{n_{1}}^{1},\ldots,t_{1}^{1},t_{n_{2}}^{2},\ldots,t_{1}^{2},t_{n_{3}}^{3},\ldots,t_{1}^{3},\ldots\quad\quad (2)

But (1) and (2) imply that pV(f,(0,1))=\infty (reading the first k blocks of (2) in increasing order gives a partition whose corresponding sum exceeds kC, for every k), which is a contradiction.

\mathbf{(ii)} \mathbf{g(1_{-})=pV(f,(0,1))}

This is almost immediate: since g is increasing and g(t)\le pV(f,(0,1)) for every t\in (0,1), we have g(1_{-})\le pV(f,(0,1)). Conversely, for any partition 0<t_{1}<\cdots<t_{n}<1 there exists a t (any t\in (t_{n},1)) such that

\sum_{i=1}^{n-1}|f(t_{i+1})-f(t_{i})|\le g(t).

Thus g(1_{-})=\sup_{t\in (0,1)}g(t)\ge pV(f,(0,1)).

We are ready to show that pV(f,(0,1))=pV(f_{1},(0,1))+pV(f_{2},(0,1)). Since both f_{1},\;f_{2} are increasing we have pV(f_{i},(0,1))=f_{i}(1_{-})-f_{i}(0_{+}), i=1,2. Thus

pV(f_{1},(0,1))+pV(f_{2},(0,1))=

\frac{1}{2}(g(1_{-})+f(1_{-})-g(0_{+})-f(0_{+}))+\frac{1}{2}(g(1_{-})-f(1_{-})-g(0_{+})+f(0_{+}))=

g(1_{-})-g(0_{+})=g(1_{-})=pV(f,(0,1)).
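
As a concrete illustration of the construction (a simple example worked out here, not taken from the book), take f(t)=|t-1/2| on (0,1). Then g(t)=t for every t: on (0,t] the function first decreases and then possibly increases, and in both regimes the accumulated variation adds up to t. Hence

f_{1}(t)=\frac{t+|t-1/2|}{2},\qquad f_{2}(t)=\frac{t-|t-1/2|}{2},

that is, f_{1} equals 1/4 on (0,1/2] and t-1/4 on (1/2,1), while f_{2} equals t-1/4 on (0,1/2] and 1/4 on (1/2,1). Both are increasing, f=f_{1}-f_{2}, and indeed pV(f_{1},(0,1))+pV(f_{2},(0,1))=1/2+1/2=1=pV(f,(0,1)).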

This is enough for today! In the next post we are going to examine the exact relation between functions of bounded variation and functions of finite pointwise variation. We are also going to discuss the Cantor–Vitali function!

PS: In the last proof we used a fact that is true but which we did not prove. Can you see what this fact is?
