# Lipschitz continuous gradient

Last time, we talked about strong convexity. Today, let us look at another important concept in convex optimization, the *Lipschitz continuous gradient* condition, which is essential for ensuring the convergence of many gradient descent based algorithms. This post is also mainly based on my course project report.

It is worth noting that there exists a duality (Fenchel duality) between strong convexity and Lipschitz continuous gradient, which implies that once we have a good understanding of one, we can easily understand the other.

**Note:** Indeed, all the results in this post can be easily proved via the same method adopted in the post on strong convexity. This is the beauty of duality!

As usual, let us first begin with the definition.

A differentiable function $f$ is said to have an $L$-Lipschitz continuous gradient if for some $L > 0$,

$$\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|, \quad \forall x, y.$$

**Note:** The definition does not assume convexity of $f$.
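As a quick sanity check, the definition can be verified numerically. The following sketch uses a hypothetical quadratic of my own choosing (not from the original post): for $f(x) = \frac{1}{2}x^T A x$ with $A = \mathrm{diag}(2, 1)$, the gradient is $Ax$ and the smallest valid Lipschitz constant is $L = 2$, the largest eigenvalue of $A$.

```python
import math
import random

# Hypothetical example: f(x) = 0.5 * x^T A x with A = diag(2, 1).
# Its gradient is A x, and the best Lipschitz constant is L = 2,
# the largest eigenvalue of A.
A = [[2.0, 0.0], [0.0, 1.0]]
L = 2.0

def grad_f(x):
    # Matrix-vector product A x
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

random.seed(0)
violations = 0
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in range(2)]
    y = [random.uniform(-5, 5) for _ in range(2)]
    gdiff = [a - b for a, b in zip(grad_f(x), grad_f(y))]
    xdiff = [a - b for a, b in zip(x, y)]
    # Check ||grad f(x) - grad f(y)|| <= L ||x - y|| (small tolerance)
    if norm(gdiff) > L * norm(xdiff) + 1e-12:
        violations += 1
print(violations)  # 0
```

For this quadratic the check holds with equality only along the eigenvector of the largest eigenvalue, which is why $L = 2$ is tight.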

Now, we will list some other conditions that are related or equivalent to the Lipschitz continuous gradient condition.

[0] $\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|$, $\forall x, y$.

[1] $g(x) = \frac{L}{2}\|x\|^2 - f(x)$ is convex.

[2] $f(y) \le f(x) + \nabla f(x)^T(y - x) + \frac{L}{2}\|y - x\|^2$, $\forall x, y$.

[3] $(\nabla f(x) - \nabla f(y))^T(x - y) \le L\|x - y\|^2$, $\forall x, y$.

[4] $f(\alpha x + (1-\alpha)y) \ge \alpha f(x) + (1-\alpha)f(y) - \frac{\alpha(1-\alpha)L}{2}\|x - y\|^2$, $\forall x, y$ and $\alpha \in [0, 1]$.

[5] $f(y) \ge f(x) + \nabla f(x)^T(y - x) + \frac{1}{2L}\|\nabla f(y) - \nabla f(x)\|^2$, $\forall x, y$.

[6] $(\nabla f(x) - \nabla f(y))^T(x - y) \ge \frac{1}{L}\|\nabla f(x) - \nabla f(y)\|^2$, $\forall x, y$.

[7] $f(\alpha x + (1-\alpha)y) \le \alpha f(x) + (1-\alpha)f(y) - \frac{\alpha(1-\alpha)}{2L}\|\nabla f(x) - \nabla f(y)\|^2$, $\forall x, y$ and $\alpha \in [0, 1]$.

**Note:** We assume that the domains of $f$ and $g$ are both $\mathbb{R}^n$, and hence convex.

### Relationships Between Conditions

The next proposition gives the relationships between all the conditions mentioned above. If you have already mastered all the tricks in the post of strong convexity, you can easily prove all the results by yourself. Try it now!

**Proposition.** For a function $f$ with an $L$-Lipschitz continuous gradient over $\mathbb{R}^n$, the following implications hold:

$$[5] \Rightarrow [7] \Rightarrow [6] \Rightarrow [0] \Rightarrow [1] \Leftrightarrow [2] \Leftrightarrow [3] \Leftrightarrow [4].$$

If we further assume that $f$ is convex, then all the conditions [0]–[7] are equivalent.
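Before diving into the proof, the two most frequently used conditions, the quadratic upper bound [2] (the "descent lemma") and co-coercivity [6], can be spot-checked numerically. This is a sketch with a hypothetical convex quadratic of my own (the condition numbering follows the list above):

```python
import random

# Hypothetical test function (not from the post): f(x) = x1^2 + 0.5*x2^2,
# which is convex with an L-Lipschitz gradient for L = 2.
L = 2.0

def f(x):
    return x[0] ** 2 + 0.5 * x[1] ** 2

def grad(x):
    return [2.0 * x[0], x[1]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(1)
ok2 = ok6 = True
for _ in range(1000):
    x = [random.uniform(-5, 5), random.uniform(-5, 5)]
    y = [random.uniform(-5, 5), random.uniform(-5, 5)]
    d = [y[0] - x[0], y[1] - x[1]]                          # y - x
    g = [grad(x)[0] - grad(y)[0], grad(x)[1] - grad(y)[1]]  # grad f(x) - grad f(y)
    # [2]: f(y) <= f(x) + grad f(x)^T (y - x) + (L/2) ||y - x||^2
    if f(y) > f(x) + dot(grad(x), d) + (L / 2) * dot(d, d) + 1e-9:
        ok2 = False
    # [6]: (grad f(x) - grad f(y))^T (x - y) >= (1/L) ||grad f(x) - grad f(y)||^2
    if dot(g, [-d[0], -d[1]]) < dot(g, g) / L - 1e-9:
        ok6 = False
print(ok2, ok6)  # True True
```

Of course, a numerical check is no substitute for the proof below; it only illustrates what the conditions assert pointwise.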

*Proof:* Again, the key idea behind the proof is transformation, i.e., transform a function $f$ with Lipschitz continuous gradient into another convex function $g(x) = \frac{L}{2}\|x\|^2 - f(x)$, which enables us to apply the equivalent conditions for convexity.

$[1] \Leftrightarrow [2]$: It follows from the first-order condition for convexity of $g$, i.e., $g$ is convex if and only if

$$g(y) \ge g(x) + \nabla g(x)^T(y - x), \quad \forall x, y.$$

Substituting $g(x) = \frac{L}{2}\|x\|^2 - f(x)$ and re-arranging yields exactly [2].

$[1] \Leftrightarrow [3]$: It follows from the monotone gradient condition for convexity of $g$, i.e., $g$ is convex if and only if

$$(\nabla g(x) - \nabla g(y))^T(x - y) \ge 0, \quad \forall x, y.$$

Since $\nabla g(x) = Lx - \nabla f(x)$, this is exactly [3].

$[1] \Leftrightarrow [4]$: It simply follows from the definition of convexity, i.e., $g$ is convex if and only if

$$g(\alpha x + (1-\alpha)y) \le \alpha g(x) + (1-\alpha)g(y), \quad \forall x, y \text{ and } \alpha \in [0, 1].$$

$[0] \Rightarrow [3]$: It simply follows from the Cauchy-Schwarz inequality:

$$(\nabla f(x) - \nabla f(y))^T(x - y) \le \|\nabla f(x) - \nabla f(y)\|\,\|x - y\| \le L\|x - y\|^2.$$

$[6] \Rightarrow [0]$: It simply follows from the Cauchy-Schwarz inequality:

$$\frac{1}{L}\|\nabla f(x) - \nabla f(y)\|^2 \le (\nabla f(x) - \nabla f(y))^T(x - y) \le \|\nabla f(x) - \nabla f(y)\|\,\|x - y\|.$$

$[7] \Rightarrow [6]$: Re-arranging [7], we have

$$f(x + (1-\alpha)(y - x)) - f(x) \le (1-\alpha)(f(y) - f(x)) - \frac{\alpha(1-\alpha)}{2L}\|\nabla f(x) - \nabla f(y)\|^2.$$

Dividing both sides by $1 - \alpha$ and letting $\alpha \to 1$, we get

$$\nabla f(x)^T(y - x) \le f(y) - f(x) - \frac{1}{2L}\|\nabla f(x) - \nabla f(y)\|^2.$$

Interchanging $x$ and $y$ in this inequality and adding the two together yields [6].

$[5] \Rightarrow [7]$: Let $z = \alpha x + (1-\alpha)y$. Applying [5] at the pairs $(z, x)$ and $(z, y)$, we have

$$f(x) \ge f(z) + \nabla f(z)^T(x - z) + \frac{1}{2L}\|\nabla f(x) - \nabla f(z)\|^2,$$

$$f(y) \ge f(z) + \nabla f(z)^T(y - z) + \frac{1}{2L}\|\nabla f(y) - \nabla f(z)\|^2.$$

Multiplying the first inequality by $\alpha$ and the second by $1 - \alpha$, and adding them together (note that $\alpha(x - z) + (1-\alpha)(y - z) = 0$) yields

$$\alpha f(x) + (1-\alpha)f(y) \ge f(z) + \frac{\alpha}{2L}\|\nabla f(x) - \nabla f(z)\|^2 + \frac{1-\alpha}{2L}\|\nabla f(y) - \nabla f(z)\|^2 \ge f(z) + \frac{\alpha(1-\alpha)}{2L}\|\nabla f(x) - \nabla f(y)\|^2,$$

where the second inequality follows from the inequality

$$\alpha\|a\|^2 + (1-\alpha)\|b\|^2 \ge \alpha(1-\alpha)\|a - b\|^2,$$

which holds since $\alpha\|a\|^2 + (1-\alpha)\|b\|^2 - \alpha(1-\alpha)\|a - b\|^2 = \|\alpha a + (1-\alpha)b\|^2 \ge 0$.

If $f$ is convex, we can easily show $[2] \Rightarrow [5]$, which implies that all the conditions are equivalent in this case.

$[2] \Rightarrow [5]$: Let us consider the function $\phi(z) = f(z) - \nabla f(x)^T z$, which attains its minimum at $z = x$ since $\phi$ is convex (as $f$ is) and $\nabla \phi(x) = 0$. Moreover, $\phi$ also satisfies [2], since $\nabla \phi$ and $\nabla f$ differ only by a constant, which implies that

$$\phi(z') \le \phi(z) + \nabla \phi(z)^T(z' - z) + \frac{L}{2}\|z' - z\|^2, \quad \forall z, z'.$$

Taking the minimum with respect to $z'$ on both sides (the right-hand side is minimized at $z' = z - \frac{1}{L}\nabla \phi(z)$) yields

$$\phi(x) \le \phi(z) - \frac{1}{2L}\|\nabla \phi(z)\|^2.$$

Re-arranging with $z = y$ gives the result.
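As a practical payoff of the descent lemma [2]: plugging $y = x - \frac{1}{L}\nabla f(x)$ into [2] gives $f(y) \le f(x) - \frac{1}{2L}\|\nabla f(x)\|^2$, which is why gradient descent with step size $1/L$ decreases $f$ at every step. A minimal sketch, again on my own illustrative quadratic rather than anything from the post:

```python
# Gradient descent with step size 1/L on the hypothetical quadratic
# f(x) = x1^2 + 0.5*x2^2 (L = 2). The descent lemma [2] guarantees
# f decreases by at least ||grad f(x)||^2 / (2L) per step.
L = 2.0

def f(x):
    return x[0] ** 2 + 0.5 * x[1] ** 2

def grad(x):
    return [2.0 * x[0], x[1]]

x = [4.0, -3.0]
values = [f(x)]
for _ in range(50):
    g = grad(x)
    x = [x[0] - g[0] / L, x[1] - g[1] / L]  # step size 1/L
    values.append(f(x))

# Objective values are monotonically non-increasing and approach the minimum 0
monotone = all(b <= a + 1e-12 for a, b in zip(values, values[1:]))
print(monotone, values[-1] < 1e-6)  # True True
```

With any step size larger than $2/L$ the iteration can diverge on this example, which is the usual way the Lipschitz constant enters step-size rules.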

### Citation

Recently, I have received a lot of emails from my dear readers asking how to cite the content of this blog. I am quite surprised and also glad that my blog posts are more popular than expected. Fortunately, I have an arXiv paper that summarizes all the results. Here is the citation:

Zhou, Xingyu. “On the Fenchel Duality between Strong Convexity and Lipschitz Continuous Gradient.” arXiv preprint arXiv:1803.06573 (2018).

**THE END**

Now, it’s time to take a break by appreciating the masterpiece of Monet.

**The Garden at Sainte-Adresse**

*courtesy of www.Claude-Monet.com*