Impossible Lines

In a formal field such as Information Theory (IT), the boundary between possible and impossible is well delineated: given a problem, the optimality of a solution can, in principle, be checked and determined unambiguously. As a pertinent example, IT says that there are ways to compress an “information source”, say a class of images, down to some file size, and that no conceivable solution could do any better than that theoretical limit. This is often a cause of confusion among newcomers, who tend to focus more naturally on improving existing solutions — say, on producing a better compression algorithm as in “Silicon Valley” — rather than asking whether the effort could be fruitful at all, given the intrinsic informational limits.
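To make the idea of a theoretical limit concrete, here is a small illustrative sketch (mine, not from the post): for an i.i.d. binary source, Shannon's source coding theorem says no lossless compressor can, on average, beat the source entropy in bits per symbol, and a general-purpose compressor such as zlib indeed stays above that line.

```python
import math
import random
import zlib

# Illustrative sketch: compare a real compressor against the entropy limit
# for a toy binary source with P(1) = p. All numbers here are assumptions
# chosen for the example, not anything taken from the post.
p = 0.1
n = 100_000

# Entropy of a Bernoulli(p) source: the theoretical compression limit (bits/symbol).
entropy = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Draw a sample and pack it into bytes (8 symbols per byte) before compressing.
random.seed(0)
bits = [1 if random.random() < p else 0 for _ in range(n)]
packed = bytes(
    sum(bit << k for k, bit in enumerate(bits[i:i + 8]))
    for i in range(0, n, 8)
)
compressed = zlib.compress(packed, level=9)

print(f"theoretical limit : {entropy:.3f} bits/symbol")
print(f"zlib achieves     : {8 * len(compressed) / n:.3f} bits/symbol")
# No compressor, however clever, can beat the first number on average.
```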

The strong formalism has been among the key reasons for the many successes of IT, but — some may argue — it has also hindered its application to a broader set of problems. (Claude Shannon himself famously warned against an excessively liberal use of the theory.) It is not unusual for IT experts to look with suspicion at fields such as Machine Learning (ML), in which the boundary between possible and impossible is constantly redrawn by advances in algorithm design and computing power.

In fact, a less formal field such as ML allows practice to precede theory, letting the former push the state-of-the-art boundary in the process. As a case in point, deep neural networks, which power countless algorithms and applications, are still hardly understood from a theoretical viewpoint. The same is true for the more recent algorithmic framework of Generative Adversarial Networks (GANs). GANs can generate realistic images of faces, animals and rooms from datasets of related examples, producing fake faces, animals and rooms that cannot be distinguished from their real counterparts. It is expected that soon enough GANs will even be able to generate videos of events that never happened: watch Françoise Hardy discuss the current US president in the '60s. While the theory may be lagging behind, these methods are making significant practical contributions.
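For readers who have not met GANs before, the adversarial idea can be sketched in a few lines: a generator maps noise to samples, a discriminator tries to tell real samples from generated ones, and the two are trained against each other. The snippet below is a minimal illustration on a toy one-dimensional source using PyTorch; the architecture, hyperparameters and "dataset" are my own illustrative choices, not anything described in the post.

```python
import torch
import torch.nn as nn

# Minimal GAN sketch (after Goodfellow et al., 2014) on a toy 1-D source.
torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # The "dataset": samples from N(4, 1), standing in for images of faces, rooms, etc.
    return torch.randn(n, 1) + 4.0

for step in range(2000):
    # Discriminator step: label real data 1, generated data 0.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make D label generated data as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# After training, generated samples should drift toward the real mean (about 4).
print("generated sample mean:", G(torch.randn(1000, 8)).mean().item())
```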

Interestingly, GANs can be interpreted in terms of information-theoretic quantities (namely the Jensen-Shannon divergence), showing that the gap between the two fields is perhaps not as unbridgeable as it has broadly been assumed to be, at least in recent years.
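The link alluded to here is the standard result from the original GAN paper (Goodfellow et al., 2014); a sketch of it, in my own notation, is the following.

```latex
% The GAN value function: the discriminator D maximizes it, the generator G minimizes it.
\[
V(G, D) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
        + \mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right]
\]
% For a fixed generator with sample distribution $p_g$, the optimal discriminator is
\[
D^{*}(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)},
\]
% and substituting it back into the objective gives
\[
\max_{D} V(G, D) = 2\,\mathrm{JSD}\!\left(p_{\mathrm{data}} \,\middle\|\, p_g\right) - \log 4,
\]
% so the generator that minimizes the objective is precisely the one that drives the
% Jensen-Shannon divergence to zero, i.e. $p_g = p_{\mathrm{data}}$.
```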
