Mind-Body Computing

As any quantitative researcher knows all too well, visualizing data facilitates interpretation and extrapolation, and can be a powerful tool for solving problems and motivating decisions and actions. Visualization allows us to leverage our intuitive sense of space in order to grasp connections and relationships, as well as to notice parallels and analogies. (It can, of course, also be used to confuse.)

Modern machine learning algorithms operate on very high-dimensional data structures that cannot be directly visualized by humans. In this sense, machines can “see”, and “think”, in spaces that are inaccessible to our eyes. In Richard Hamming’s words:

“Just as there are odors that dogs can smell and we cannot, as well as sounds that dogs can hear and we cannot, so too there are wavelengths of light we cannot see and flavors we cannot taste. Why then, given our brains wired the way they are, does the remark ‘Perhaps there are thoughts we cannot think’, surprise you?”

In an era of smart homes, smart cities, and smart governments, methods that visualize high-dimensional data in two dimensions can allow us to bridge, albeit partially, the understanding gap between humans and algorithms. This explains the success of techniques such as t-SNE, especially when coupled with interactive graphical interfaces.
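As a small illustration of such techniques, here is a minimal sketch of t-SNE using scikit-learn; the toy data and all parameter choices below are mine, purely for illustration:

```python
# Minimal t-SNE sketch with scikit-learn (toy data; illustrative settings only).
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Toy high-dimensional data: two well-separated clusters in 50 dimensions.
cluster_a = rng.normal(loc=0.0, scale=1.0, size=(100, 50))
cluster_b = rng.normal(loc=5.0, scale=1.0, size=(100, 50))
data = np.vstack([cluster_a, cluster_b])

# Embed the 50-dimensional points into 2 dimensions for plotting.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(data)
print(embedding.shape)  # (200, 2)
```

The resulting 2-D embedding can then be handed to any plotting library; the separation between the two clusters is typically preserved in the low-dimensional map.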

As virtual and mixed reality technologies vie with standard two-dimensional interfaces as the dominant medium between us and the machines, data visualization and interactive representation stand to gain another dimension. And the difference may not be merely a quantitative one. As suggested by Jaron Lanier:

“People think differently when they express themselves physically. […] Having a piano in front of me makes me smarter by applying the biggest part of my cortex, the part associated with haptics. […] Don’t just think of VR as the place where you can look at a molecule in 3-D, or perhaps handle one, like all those psychiatrists in Freud avatars. No! VR is the place where you become a molecule. Where you learn to think like a molecule. Your brain is waiting for the chance.”

As such, VR may allow humans “to explore motor cortex intelligence”. Can this result in a new wave of innovations and discoveries?


Common Sense and Smart Appliances


At CES 2018, most new consumer products, such as smart appliances and self-driving cars, will sport the label “AI”. As the New York Times puts it: “the real star is artificial intelligence, the culmination of software, algorithms and sensors working together to make your everyday appliances smarter and more automated”. Given the economic and intellectual clout of the acronym AI, it is worth pausing to question its significance, and to reflect on the possible implications of its inflated use in the tech industry and in the academic world.

In 1978, John McCarthy quipped that producing a human-level, or general, AI would require “1.7 Einsteins, 2 Maxwells, 5 Faradays and .3 Manhattan Projects.” Forty years later, despite many predictions of an upcoming age of superintelligent machines, little progress has been made towards the goal of a general AI. The AI powering the new consumer devices is in fact mostly deep learning, a specialized learning technique that performs pattern recognition by interpolating across a large number of data points.

Nearly twenty years earlier, in 1959, in his seminal paper “Programs with Common Sense”, McCarthy associated human-like intelligence with the capability to deduce consequences by extrapolating from experience and knowledge even in the presence of previously unforeseen circumstances. This is in stark contrast with the interpolation of observations to closely related contingencies allowed by deep learning. According to this view, intelligent thinking is the application of common sense to long-tail phenomena, that is, to previously unobserved events that fall outside the “manifold” spanned by the available data.

This generalized form of intelligence appears to be strongly related to the ability to acquire and process information through language. A machine with common sense should be able to answer queries such as “If you stick a pin into a carrot, does it make a hole in the carrot or in the pin?”, without having to rely on many joint observations of carrots and pins. To use Hector Levesque’s distinction, if state-of-the-art machine learning techniques acquire street smarts by learning from repeated observations, general AI requires the book smarts necessary to learn from written or spoken language. Language is indeed widely considered to be the next big frontier of AI.

As most of the academic talent in AI is recruited by the Big Five (Amazon, Apple, Google, Facebook and Microsoft), the economic incentives for the development of general AI seem insufficient to meet the level of effort posited by McCarthy. And so, rather than worrying about a takeover by a super-AI, given the dominance of the current state-of-the-art specialized AI, what we should “be most concerned about is the possibility of computer systems that are less than fully intelligent, but are nonetheless considered to be intelligent enough to be given the authority to control machines and make decisions on their own. The true danger […] is with systems without common sense making decisions where common sense is needed.” (Hector Levesque, “Common Sense, the Turing Test, and the Quest for Real AI”).

The Rise of Hybrid Digital-Analog

As a keen observer of nature, Leonardo da Vinci was more comfortable with geometry than with arithmetic. Shapes, being continuous quantities, were easier to fit into, and disappear into, the observable world than discrete, discontinuous numbers. For centuries since Leonardo, physics has shared his preference for analog thinking, building on calculus to describe macroscopic phenomena. The analog paradigm was upended at the beginning of the last century, when the quantum revolution revealed that the microscopic world behaves digitally, with observable quantities taking only discrete values. Quantum physics is, however, at heart a hybrid analog-digital theory, as it requires the presence of analog hidden variables to model the digital observations.

Computing technology appears to be following a similar path. The state-of-the-art computer that Claude Shannon found in Vannevar Bush‘s lab at MIT in the thirties was analog: turning its wheels would set the parameters of a differential equation to be solved by the computer via integration. Shannon’s thesis and the invention of the transistor ushered in the era of digital computing and the information age, relegating analog computing to little more than a historical curiosity.

But analog computing retains important advantages over digital machines. Analog computers can be faster in carrying out specialized tasks. As an example, deep neural networks, which have led to the well-publicized breakthroughs in pattern recognition, reinforcement learning, and data generation tasks, are inherently analog (although they are currently mostly implemented on digital platforms). Furthermore, while the reliance of digital computing on either-or choices can provide a higher accuracy, it can also yield catastrophic failures. In contrast, the lower accuracy of analog systems is accompanied by a gradual performance loss in case of errors. Finally, analog computers can leverage time, not just as a neutral substrate for computation as in digital machines, but as an additional information-carrying dimension. The resulting space-time computing has the potential to reduce the energetic and spatial footprint of information processing.

The outlined complementarity of analog and digital computing has led experts to predict that hybrid digital-analog computers will be the way of the future. Already in the eighties, Terrence J. Sejnowski is reported to have said: “I suspect the computers used in the future will be hybrid designs, incorporating analog and digital.” This conjecture is supported by our current understanding of the operation of biological neurons, which communicate using the digital language of spikes, but maintain internal analog states in the form of membrane potentials.

With the emergence of academic and commercial neuromorphic processors, the rise of hybrid digital-analog computing may be just around the corner. As is often the case, the trend has been anticipated by fiction. In Autonomous, robots have a digital main logic unit with a human brain as a coprocessor to interpret people’s reactions and emotions. Analog elements can support common sense and humanity, in contrast to digital AI that “can make a perfect chess move while the room is on fire.” Similarly, in H(a)ppy and Gnomon, analog is an element of disruption and reason in an ideally ordered and purified world under constant digital surveillance.

(Update: Here is a recent relevant article.)

When Message and Meaning are One and the Same

The indigenous creatures of Embassytown — an outpost of the human diaspora somewhere/somewhen in the space-time imagined by China Miéville — communicate via the Language. Despite requiring two coordinated sound sources to be spoken, the Language does not have the capacity to express any duplicitous thought: every message, in order to be perceived as part of the Language, must correspond to a physical reality. A spoken message is hence merely a link to an existing object, and it ceases being recognized as a message when the linked object is no longer in existence.

As Miéville describes it: “… each word of Language, sound isomorphic with some Real: not a thought, not really, only self-expressed worldness […] Language had always been redundant: it had only ever been the world.”

The Language upends Shannon’s premise that the semantic aspects of communication are irrelevant to the problem of transferring and storing information. In the Language, recorded sounds, untied to the state of the mind that produced them, do not carry any information. In a reversal of Shannon’s framework, information is thus inextricably linked to its meaning, and preserving information requires the maintenance of the physical object that embodies its semantics.

When message and meaning are one and the same as in the Language, information cannot be represented in any format other than in its original expression; Shannon’s information theory ceases to be applicable; and information becomes analog, irreproducible, and intrinsically physical. (And, as the events in the novel show, interactions with the human language may lead to some dramatic unforeseen consequences.)

A Brief Introduction to Machine Learning for Engineers

Having taught courses on machine learning, I am often asked by colleagues and students with a background in engineering to suggest “the best place to start” to get into this subject. I typically respond with a list of books — for a general, but slightly outdated introduction, read this book; for a detailed survey of methods based on probabilistic models, check this other reference; to learn about statistical learning, I found this text useful; and so on. This answer strikes me, and most likely also my interlocutors, as quite unsatisfactory. This is especially so since the size of many of these books may be discouraging for busy professionals and students working on other projects. These notes are my first attempt to offer a basic and compact reference that describes key ideas and principles in simple terms and within a unified treatment, encompassing also more recent developments and pointers to the literature for further study. This is a work in progress and feedback is very welcome! (MIT Technology Review link)

Human In the Loop

Feed the data on the left (adapted from this book by Pearl and co-authors) to a learning machine. With confidence, the trained algorithm will predict lower cholesterol levels for individuals who exercise less. While counter-intuitive, the prediction is sound and supported by the available data. Nonetheless, no one could in good faith use the output of this algorithm as a prescription to reduce the number of hours at the gym.

This is clearly a case of correlation being distinct from causation. But how do we know? And how can we ensure that an AI Doctor would not interpret the data incorrectly and produce a harmful diagnosis?

We know because we have prior information on the problem domain. Thanks to our past experience, we can explain away this spurious correlation by including another measurable variable in the model, namely age. To see this, consider the same data, now redrawn by highlighting the age of the individual corresponding to each data point. The resulting figure, shown on the right, reveals that older people — within the observed bracket — tend to have higher cholesterol as well as to exercise more: age is a common cause of both exercise and cholesterol levels. In order to capture the causal relationship between the latter variables, we hence need to adjust for age. Doing this requires considering the trend within each age group separately, recovering the expected conclusion that exercising is useful to lower one’s cholesterol.
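The sign reversal described above is easy to reproduce numerically. The following sketch uses made-up numbers (not the data from Pearl’s book): age drives both exercise and cholesterol, so the pooled trend is positive while the trend within each age group is negative:

```python
# Toy Simpson's-paradox illustration (all numbers are invented for illustration).
import numpy as np

rng = np.random.default_rng(1)
groups = []
for age in [30, 40, 50, 60]:
    # Older people exercise more (age shifts the exercise distribution)...
    exercise = rng.uniform(0, 5, size=200) + 0.1 * age
    # ...and have higher cholesterol, while exercise itself lowers cholesterol.
    chol = 4.0 * age - 10.0 * exercise + rng.normal(0, 5, size=200)
    groups.append((age, exercise, chol))

# Pooled data: slope of cholesterol vs. exercise, ignoring age.
all_ex = np.concatenate([ex for _, ex, _ in groups])
all_ch = np.concatenate([ch for _, _, ch in groups])
pooled_slope = np.polyfit(all_ex, all_ch, 1)[0]

# Adjusting for age: fit the slope within each age group separately.
group_slopes = [np.polyfit(ex, ch, 1)[0] for _, ex, ch in groups]

print(pooled_slope > 0)                   # True: more exercise "predicts" higher cholesterol
print(all(s < 0 for s in group_slopes))   # True: within each age group, exercise lowers it
```

The pooled fit confuses the effect of exercise with that of the common cause, age; conditioning on age recovers the correct negative relationship.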

And yet an AI Doctor that is given only the data set in the first figure would have no way of deciding that the observed upward trend hides a spurious correlation through another variable. More generally, just like the AI Doctor blinded by a wrong model, AI algorithms used for hiring, rating teachers’ performance or credit assessment can mistake correlation for causation and produce biased, or even discriminatory, decisions.

As seen, solving this problem would require making modeling choices, identifying relevant variables and their relationships — a task that appears to require a human in the loop. Add this to the, still rather short, list of new jobs created by the introduction of AI and machine learning technologies in the workplace.



A Few Things I Didn’t Know About Claude Shannon

Claude Shannon, US mathematician, 1962.

  • While he was a student at MIT, Claude Shannon, the future father of Information Theory, trained as an aircraft pilot in his spare time (to the protestations of the instructor, who was worried about damaging such a promising brain).
  • What do Coco Chanel, Truman Capote, Albert Camus, Gandhi, Malcolm X and Claude Shannon have in common? They were all photographed by Henri Cartier-Bresson (see photo).
  • Having pioneered artificial intelligence research with his maze-solving mouse and his chess-playing machine, in 1984 Shannon proposed the following targets for 2001: 1) Beat the chess world champion (check); 2) Generate a poem accepted for publication by the New Yorker (work in progress); 3) Prove the Riemann hypothesis (work in progress); 4) Pick stocks outperforming the prime rate by 50% (check, although perhaps with some delay).
  • Shannon corresponded with L. Ron Hubbard of Scientology fame, writing about him that he “has been doing very interesting work lately in using a modified hypnotic technique for therapeutic purposes”, although he later conceded that he did not know “whether or not his treatment contains anything of value”.
  • He is quoted as saying that great insights spring from a “constructive dissatisfaction”, that is, “a slight irritation when things don’t quite look right”.

(From “A Mind at Play“, an excellent book about Claude Shannon by Jimmy Soni and Rob Goodman.)

Impossible Lines

In a formal field such as Information Theory (IT), the boundary between possible and impossible is well delineated: given a problem, the optimality of a solution can in principle be checked and determined unambiguously. As a pertinent example, IT says that there are ways to compress an “information source”, say a class of images, down to some file size, and that no conceivable solution can do any better than the theoretical limit. This is often a cause of confusion among newcomers, who tend to focus more naturally on improving existing solutions — say, on producing a better compression algorithm as in “Silicon Valley” — rather than asking whether the effort could be fruitful at all, given the intrinsic informational limits.
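As a concrete toy instance of such a limit, consider a memoryless source over four symbols. Its Shannon entropy lower-bounds the average number of bits per symbol of any lossless code, and for these (deliberately dyadic) probabilities a Huffman code attains the bound exactly:

```python
# Shannon entropy as the fundamental limit of lossless compression
# (toy memoryless source; the probabilities are chosen for illustration).
import math

probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Entropy in bits per symbol: H = -sum_x p(x) log2 p(x).
entropy = -sum(p * math.log2(p) for p in probs.values())
print(entropy)  # 1.75

# A Huffman code for this source has codeword lengths 1, 2, 3, 3,
# so its average length matches the entropy exactly.
avg_length = 0.5 * 1 + 0.25 * 2 + 0.125 * 3 + 0.125 * 3
print(avg_length)  # 1.75
```

No compressor, however clever, can average fewer than 1.75 bits per symbol on this source; for non-dyadic probabilities the entropy remains the limit, approached by coding long blocks of symbols.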

The strong formalism has been among the key reasons for the many successes of IT, but — some may argue — it has also hindered its applications to a broader set of problems. (Claude Shannon himself famously warned about an excessively liberal use of the theory.) It is not unusual for IT experts to look with suspicion at fields such as Machine Learning (ML) in which the boundaries between possible and impossible are constantly redrawn by advances in algorithm design and computing power.

In fact, a less formal field such as ML allows practice to precede theory, letting the former push the state-of-the-art boundary in the process. As a case in point, deep neural networks, which power countless algorithms and applications, are still hardly understood from a theoretical viewpoint. The same is true for the more recent algorithmic framework of Generative Adversarial Networks (GANs). GANs can generate realistic images of faces, animals and rooms from datasets of related examples, producing fake faces, animals and rooms that cannot be distinguished from their real counterparts. It is expected that soon enough GANs will even be able to generate videos of events that never happened (watch Françoise Hardy discuss the current US president in the sixties). While the theory may be lagging behind, these methods are making significant practical contributions.

Interestingly, GANs can be interpreted in terms of information-theoretic quantities (namely the Jensen-Shannon divergence), showing that the gap between the two fields is perhaps not as unbridgeable as it has been broadly assumed to be, at least in recent years.
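For reference, the connection is the standard result from the original GAN paper: the generator G and the discriminator D play a minimax game, and, once the discriminator is optimal for a fixed generator, the generator’s objective reduces (up to constants) to the Jensen-Shannon divergence between the data distribution and the distribution of the generated samples:

```latex
% GAN minimax game:
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]
% At the optimal discriminator, the generator's objective becomes:
C(G) = -\log 4 + 2\,\mathrm{JSD}\!\left(p_{\mathrm{data}} \,\|\, p_G\right)
```

Training the generator thus amounts, at the optimum of the inner game, to minimizing an information-theoretic divergence — a bridge between ML practice and IT formalism.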