Conditional Gradient Descent (a.k.a. Frank-Wolfe algorithm)

Recently, I read Martin Wainwright’s old paper, A new class of upper bounds on the log partition function, and found that conditional gradient was used there to optimize the edge appearance probabilities. Sébastien Bubeck gave a nice introduction to the conditional gradient method.
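As a quick refresher on how the method works: at each iteration it computes the gradient, solves a linear minimization over the feasible set to get an extreme point s_t, and moves to the convex combination x_{t+1} = (1 - gamma_t) x_t + gamma_t s_t with the standard step size gamma_t = 2/(t+2), so no projection step is ever needed. Here is a minimal sketch in Python; it is not taken from either reference, and the quadratic objective, the simplex constraint, and the helper names frank_wolfe and simplex_lmo are purely illustrative assumptions.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, n_iters=100):
    """Generic conditional gradient (Frank-Wolfe) loop.

    grad : callable returning the gradient of the objective at x
    lmo  : linear minimization oracle; returns argmin over the feasible set of <s, g>
    x0   : feasible starting point
    """
    x = x0
    for t in range(n_iters):
        g = grad(x)
        s = lmo(g)                       # solve the linear subproblem over the feasible set
        gamma = 2.0 / (t + 2.0)          # standard step-size schedule
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Illustrative example (assumption, not from the paper):
# minimize ||A x - b||^2 over the probability simplex.
# Over the simplex, the LMO simply picks the vertex with the smallest gradient entry.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

grad = lambda x: 2 * A.T @ (A @ x - b)

def simplex_lmo(g):
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

x_star = frank_wolfe(grad, simplex_lmo, np.ones(5) / 5)
print(x_star)
```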


