The Library


Imagine a library — a real one, with actual books — with an unusual rule: no novels, essays or magazines are allowed, only private journals, diaries, thoughts, rants, speculation, accusations, and any “true, authentic documents reflecting the real spirit of the people.” The library is open to the public, and each document can be read by any visitor, who can also, for a small fee, request the identity and address of the author.

This scenario, which is eerily prescient of today’s social media, was imagined by an Italian novelist in 1975, at the height of the Years of Lead. The novelist, Giorgio De Maria, writes in “The twenty days of Turin” that the appeal of the library derived from the prospect of being read by others, ideally creating a social web of connections and relationships.

But the social impact of the library turns out to be quite different from these lofty expectations, as the library ends up fostering a community of paranoid, resentful and isolated prosumers of information. As per Max Weber’s prediction, in De Maria’s Turin, progress in communication technologies pushes the individual away from public life and into a “subjectivist culture” of “sterile excitation”.

The denouement of the novel sees old ideas and myths, in the form of monuments, come back to life, somehow resuscitated by the energy channeled by the community’s desperation. A bleak vision, ominously close to our present.

Spiking Neural Networks and Neuromorphic Computing

Deep learning techniques have by now achieved unprecedented levels of accuracy in important tasks such as speech translation and image recognition, despite their known failures on properly selected adversarial examples. The operation of deep neural networks can be interpreted as the extraction, across successive layers, of approximate minimal sufficient statistics from the data, with the aim of preserving as much information as possible with respect to the desired output.

A deep neural network encodes a learned task in the synaptic weights between connected neurons. The weights define the transformation between the statistics produced by successive layers. Learning requires updating all the synaptic weights, which typically run in the millions; and inference on a new input, e.g., audio file or image, generally involves computations at all neurons. As a result, the energy required to run a deep neural network is currently incompatible with an implementation on mobile devices.
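To get a rough sense of why inference on a conventional deep network engages essentially all of the synaptic weights, the toy sketch below (a fully connected network with arbitrary, illustrative layer sizes) counts the multiply-accumulate operations performed on a single input.

```python
import numpy as np

# Toy fully connected network: every inference propagates the input through
# all weight matrices, so the number of multiply-accumulate (MAC) operations
# scales with the total number of weights. Layer sizes are illustrative only.
layer_sizes = [784, 512, 512, 10]
rng = np.random.default_rng(0)
weights = [rng.standard_normal((m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    macs = 0
    for W in weights:
        x = np.maximum(x @ W, 0.0)   # dense layer followed by a ReLU
        macs += W.size               # one MAC per synaptic weight
    return x, macs

_, macs = forward(rng.standard_normal(layer_sizes[0]))
print(f"{sum(W.size for W in weights)} weights, {macs} MACs per inference")
```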

The economic incentive to offer mobile users applications such as Siri has hence motivated the development in recent years of computation offloading schemes, whereby computation is migrated from mobile devices to remote servers accessed via a wireless interface. Accordingly, users’ data is processed on servers located within the wireless operator’s network rather than on the devices. This reduces energy consumption at the mobile devices, while, at the same time, entailing latency — a significant issue for applications such as Augmented Reality — and a potential loss of privacy.

The terminology used to describe deep learning methods — neurons, synapses — reveals the ambition to capture at least some of the brain’s functionalities via artificial means. But the contrast between the apparent efficiency of the human brain, which operates with five orders of magnitude (100,000 times) less power than today’s most powerful supercomputers, and the state of the art in neural networks remains jarring.

Current deep learning methods rely on second-generation neurons, which consist of simple static non-linear functions. In contrast, neurons in the human brain are known to communicate by means of sparse spiking processes. As a result, neurons are mostly inactive and energy is consumed sporadically and only in limited areas of the brain at any given time. Third-generation neural networks, or Spiking Neural Networks (SNNs), aim at harnessing the efficiencies of spike-domain processing by building on computing elements that operate on, and exchange, spikes. In an SNN, spiking neurons determine whether to output a spike to the connected neurons based on the incoming spikes.
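A minimal sketch of a spiking neuron is the leaky integrate-and-fire (LIF) model, a standard abstraction in the SNN literature; the parameter values and input statistics below are purely illustrative.

```python
import numpy as np

# Leaky integrate-and-fire (LIF) neuron: the membrane potential accumulates
# weighted input spikes, leaks over time, and an output spike is emitted
# (followed by a reset) only when a threshold is crossed. Most time steps
# produce no output spike, which is where the energy savings come from.
def lif_neuron(input_spikes, weights, leak=0.9, threshold=1.0):
    potential = 0.0
    output = []
    for spikes_t in input_spikes:          # one row of 0/1 spikes per time step
        potential = leak * potential + np.dot(weights, spikes_t)
        if potential >= threshold:
            output.append(1)
            potential = 0.0                # reset after firing
        else:
            output.append(0)
    return output

rng = np.random.default_rng(0)
in_spikes = (rng.random((50, 3)) < 0.1).astype(float)   # sparse input from 3 synapses
print(lif_neuron(in_spikes, weights=np.array([0.6, 0.4, 0.5])))
```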

Neuromorphic hardware is currently being developed that is able to natively implement SNNs. Unlike in traditional CPUs or GPUs running deep learning algorithms, processing and communication are not “clocked” to take place across all computing elements at regular intervals. Rather, neuromorphic hardware consists of spiking neurons that are active asynchronously, only when excited by input spikes, potentially increasing the energy efficiency by orders of magnitude.

If the promises of neuromorphic hardware and SNNs are realized and neuromorphic chips find their place within mobile devices, we could soon see the emergence of revolutionary new applications under enhanced privacy guarantees.

White Light/ White Heat

The latest novel by DeLillo may be about the fear of life and the fear of death and about the role that technology plays in activating and/or defusing both. In a previous novel, a character opined that

This is the whole point of technology. It creates an appetite for immortality on the one hand. It threatens universal extinction on the other.

The key mechanism behind the disruption and distress caused by technology in “Zero K” appears to be virtualization:

Haven’t you felt it? The loss of autonomy. The sense of being virtualized. The devices you use, the ones you carry everywhere, room to room, minute to minute, inescapably.

Virtualization refers to the realization of something — typically an operating system, a server or a network — on a different physical substrate, so that the virtual copy retains the main features (virtues) of the original and is indistinguishable from it. In (my interpretation of) DeLillo’s vision, the virtual copies of our selves stored on digital devices have become more real and relevant than the original.

In “Zero K”, escape, at least for the wealthy, is found in a cryogenically induced isolated state of pure thought after death. This state may be just another form of virtualization, but one that is out of time rather than ticking at the speed of Twitter updates. Waiting for the end of the world to bring better times.

My Generation

(Map of worldwide 4G coverage, via WorldTimeZone)

As the 3GPP standardization body continues its work towards the specification of the fifth generation (5G) of cellular systems, it is instructive to take a look at the current coverage map for the previous generations. As of the end of 2016, according to the map above, a number of countries, including Ukraine, Mongolia, Afghanistan, Myanmar, Yemen, Syria and Libya, had only 3G coverage, while others, such as the Central African Republic, Chad, Niger and Eritrea, were served only by 2G (GSM) operators. Will 5G deepen the chasm between straggling economies and more technologically advanced nations, or will it instead provide shared benefits across the board?

The argument for the first scenario is clear: 5G is mostly envisaged as a platform to connect things, such as vehicles, robots and appliances, hence catering to vertical markets but neglecting the more basic needs of countries with limited broadband connectivity. The second, more optimistic, scenario is instead backed by the idea that “developing” countries could leapfrog previous “Gs” by leveraging novel architectures based on technologies such as wireless backhauling, small cells and energy harvesting. This would allow them to benefit from the 5G-enabled connectivity among things for applications as diverse as smart transportation systems, e-health (e.g., remote surgery), remote learning and sensor networks for water management and agriculture.

It was suggested that, in addition to the three basic services currently defined for 5G, namely massive Machine Type Communication, enhanced Mobile Broadband, and Ultra Reliable Low Latency Communication,  an Ultra Low Cost Broadband service should be made part of the standard. Apart from isolated efforts by companies, such as Google X’s Project Loon and Facebook’s Aquila project, this laudable idea seems to have been mostly forgotten as of late, although the “frugal 5G” concept recently announced by the IEEE appears to be finally moving in this direction.

Choose Something Like a Star

In front pages and news feeds cluttered with instantaneous tweet-sized proclamations and reactions, the announcement earlier this week that astronomers discovered seven Earth-sized planets that might be able to sustain life was difficult to categorize and easy to dismiss. But what if life were indeed found on one of these planets? How would our public discourse change?

It is likely that a part of the world population would treat this news as another scientific “hoax” unworthy of further consideration, but it is hard to imagine that society as a whole, as well as national and international institutions, would be unaffected. Would there be popular movements advocating for escapist space explorations (perhaps facetiously) or even for an alien takeover with religious undertones? Would politicians be able to continue running on nationalist platforms centered on curbing immigration? Would the international community come together to face the threats, challenges and opportunities posed by unknown life forms (as in “The dark forest”)? Or would nations instead compete for scientific, or possibly colonial, dominance in a repeat of the European experience at the onset of the modern age?

(The title of this post is that of Robert Frost’s poem invoked by Dan Rather as a commentary to the news discussed here. The poem includes the following: “It asks of us a certain height,/ So when at times the mob is swayed/ To carry praise or blame too far,/ We may choose something like a star/ To stay our minds on and be staid.“)

Paying Our Dues


The powerful documentary “I am not your negro” by Raoul Peck recently introduced me to the important figure of James Baldwin. By juxtaposing recent events with clips of his interviews and excerpts from his final project, the film provides a striking demonstration of the relevance of Baldwin’s words on the state of racial relations in today’s United States. The reaction of part of the public to Peck’s work further attests to the timeliness of Baldwin’s ideas on this subject. The documentary also offers a glimpse of other, though not all, aspects of his thinking.

While, as Baldwin writes, “When one begins to live by habit and by quotation, one has begun to stop living”, a number of quotes from his writings on culture, progress and information are worth reproducing here:

— “The paradox of education is precisely this – that as one begins to become conscious, one begins to examine the society in which he is being educated.”

— “No people come into possession of a culture without having paid a heavy price for it.”

—  “It was books that taught me that the things that tormented me most were the very things that connected me with all the people who were alive, or who had ever been alive.”

— “No one can possibly know what is about to happen: it is happening, each time, for the first time, for the only time.”

Halt and Catch Fire


(Halt and catch fire, Season 3, Episode 3)

We are in the late 80s, at a time when the commercial Internet had yet to be born out of the ARPANET and the NSFNET. The setting is a conference room at a small start-up in Silicon Valley that runs a bulletin board system, whereby users can connect via dial-up to exchange messages and trade goods  (all the while being represented as sprites seemingly inspired by Maniac Mansion).

The managers of the start-up are discussing how to improve user experience by finding the right compromise between processing at the users’ computers and at the company’s servers. The ensuing conversation should resonate with today’s engineers and researchers working on the optimal functional allocation between the edge and the cloud in 5G cellular systems:

— “Okay, up next, how are we doing on the speed of the background graphics? Well, we’ll never get under half a second at 2,400 baud. We’re gonna need to use Huffman, or even better, Lempel-Ziv compressions so we’re not sending all the bits through.

— “Okay, since when are we doing a graduate seminar on information theory? Our guys can’t handle that. No, no, no. That’s smart not sending all the bits through. We preload the most common backgrounds on the diskettes users already have and just send the catalog numbers. Okay, so just send the index to the scene.

— “Okay, that’s good. I see that. That’s good, right? No complex coding. Well, you’ll have a limited set of images and the user will get tired of waiting for the same-old same-old, but, yeah, it’s great if that’s what you guys want to do. Great.
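The trade-off debated in the scene can be sketched in a few lines: the snippet below uses Python’s zlib (an LZ77-based codec, standing in for the Lempel-Ziv scheme mentioned in the dialogue) on a synthetic, highly repetitive “background”, and compares sending the raw bytes, the compressed bytes, or just a two-byte catalog index over a 2,400 baud line (roughly treating one baud as one bit per second).

```python
import zlib

# Option 1: send the background compressed with an LZ-style codec.
# Option 2: preload backgrounds on the users' diskettes and send only an index.
# Synthetic "background": a repetitive 64x48 tile map, one byte per tile.
background = bytes([(x // 8 + y // 8) % 4 for y in range(48) for x in range(64)])

compressed = zlib.compress(background, level=9)
catalog_index = (42).to_bytes(2, "big")      # hypothetical 2-byte catalog number

for label, payload in [("raw", background),
                       ("LZ-compressed", compressed),
                       ("catalog index", catalog_index)]:
    seconds = len(payload) * 8 / 2400        # rough time on a 2,400 baud line
    print(f"{label:15s} {len(payload):5d} bytes  ~{seconds:5.2f} s at 2,400 baud")
```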

Labeling Reality


Fiction writers have often predicted technological advances and political events, from H. G. Wells to Arthur C. Clarke and George Orwell. Two recent novels anticipate what may be in store at the intersection between virtual reality (VR), the tactile internet and the internet of skills.

In “The peripheral” by William Gibson and in “The three-body problem” by  Liu Cixin, a key part in the plot is played by VR devices featuring interfaces that provide haptic feedback to the user. Using such devices, the user is immersed in the virtual world not only audio-visually but also through the sense of touch. In both cases, the virtual reality turns out (spoiler warning!) to be more authentic than the main characters first envisioned, making their presence and decisions in the virtual world consequential for their actual lives.

In “The peripheral”, the seemingly artificial world is in fact in the actual future — or, more precisely, in an actual future — which can be affected by the actions of the user. Instead, in “The three-body problem”, the virtual reality is a model of an existing planet whose inhabitants are on their way to Earth. VR technology is used in Gibson’s story as a means to project one’s skills and experience to a different time and place, while in Liu’s novel it is leveraged as a tool for propaganda and recruiting.

In both novels, users are, at least at first, not aware of the true nature of the VR “games”, making them even more effective in their respective objectives. In an era in which reality and fiction are ever more intertwined, VR may introduce a new, potentially dangerous, element, further blurring the line between truth and made-up facts and histories.  

Information-Theoretical Principles for Networking and Computing

In applications of information theory to the fields of networking and computing, two main principles seem to be mostly invoked to reduce the communication overhead when uploading/downloading data to/from a number of nodes:

  1. Network coding: If all the other nodes in a group have the information required by any given node in the group, one transmission of a coded packet is sufficient to satisfy all the nodes’ requests. As a result, if each node has a properly selected fraction r of all data, groups of size proportional to r can be created, and the communication rate to upload the information requested by all nodes decreases as 1/r (a toy sketch of both principles follows this list).
  2. MDS coding: If all nodes have a coded fraction r of some data, then downloading information from 1/r of them is sufficient to retrieve the data.
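The sketch below illustrates both principles; the XOR example and the real-valued Vandermonde code are for illustration only and are not practical constructions.

```python
import numpy as np
from itertools import combinations

# Network coding principle (toy index-coding example): two nodes each cache
# the packet the other one wants, so a single XOR-coded broadcast replaces
# two separate transmissions.
x1, x2 = 0b1011, 0b0110
coded = x1 ^ x2
assert coded ^ x2 == x1 and coded ^ x1 == x2   # each node decodes using its side information

# MDS coding principle (toy (n, k) = (4, 2) code over the reals): each node
# stores one coded piece, i.e., a fraction r = 1/k of the data, and any
# k = 1/r nodes suffice to recover the original data.
k, n = 2, 4
data = np.array([3.0, 7.0])                                            # k data pieces
G = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)   # n x k encoder
coded_pieces = G @ data                                                 # one coded piece per node

for nodes in combinations(range(n), k):                                 # every subset of k nodes
    recovered = np.linalg.solve(G[list(nodes)], coded_pieces[list(nodes)])
    assert np.allclose(recovered, data)
print(f"data recovered from every subset of {k} out of {n} nodes")
```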

Both principles produce similar (r vs. 1/r) trade-offs between what a node “has” via storage, transfer or computing, and the communication overhead for uploading/downloading.

The network coding principle is instrumental in reducing communication overhead in multicast channels, e.g., multi-hop, Ethernet or wireless links, when side information can be transmitted, stored, collected or computed. This has been found useful in applications such as routing, caching and distributed computing (see here and here).

The MDS coding principle, instead, enables the reduction of the number of nodes from which one needs to download data by increasing the amount of data stored or computed at each node. This produces a tension between computing power and latency or reliability for set-ups in which nodes may have random delays or be unreliable. Applications include distributed computing and virtualization for wireless networks.

The two gains can also be combined for applications, such as distributed computing, that involve both multicasting and data collection.

As information theorists tackle fundamental problems in modern networking and computation applications, areas that seem ripe for the discovery of new principles encompass set-ups of relevance for 5G systems, including a massive number of users and/or ultra-reliability constraints.

Through a Glass Darkly


As “big data” is making our machine learning algorithms smarter, the wealth of online information appears to have an opposite effect on society, engendering incredulity and parochialism. The two phenomena may not be as contradictory as they would seem at first glance, as they both stem from biases caused by simplified models of reality.

Machine learning methods train mathematical models in the hope of discovering regularities that may be translated into decisions in the real world. Learning is made possible by the fact that the machines are taught to look at the data through the lens of a specific model with limited representation capabilities. Indeed, the “no free lunch” theorem ensures that no learning is possible without making prior restrictive modeling assumptions: An inductive bias is necessary to learn [1]. This approach has been extremely successful at identifying statistically significant patterns for tasks that permit the collection of large data sets, such as speech or image recognition and translation.
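As a minimal sketch of inductive bias at work (a hypothetical curve-fitting task, not one of the applications above), the same noisy samples are fit below with polynomial models of increasing degree: the model class, fixed before seeing the data, determines whether the learner underfits or overfits.

```python
import numpy as np

# The data are noisy samples of a sine wave; the "inductive bias" is the
# choice of polynomial degree made before seeing the data. Too little
# capacity underfits, too much capacity fits the noise.
rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 1, 20))
y_train = np.sin(2 * np.pi * x_train) + 0.2 * rng.standard_normal(20)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)     # least-squares polynomial fit
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: test MSE = {test_mse:.3f}")
```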

But the choice of the right model is crucial, and this is often painfully clear when machine learning interfaces with real-world decision and policy making. In fact, the bias caused by the model’s underlying assumptions, which may be hidden from the users of the algorithms, has frequently yielded nefarious consequences, e.g., when assessing credit worthiness or evaluating teachers’ performance [2]. A particularly sticky problem is the inability of certain models to account for bias inherent in the selection of a given data set. This was reported, for instance, in the Beauty.AI pageant, in which the algorithm appeared to be discriminating against minorities.

The same phenomenon seems to be at work in the creation of today’s society of isolated, and yet connected, personal models of the world, as the wealth of available information makes it possible to select sources that only reinforce one’s beliefs. Without a feedback mechanism grounded on facts (not “alternative” ones), these personal, simplified and biased, models of reality foster, and are in turn supported by, irrational urges [3].

Politics appears to have played an important role in the public distrust of established sources of information and in the specialization of isolated antagonistic models. In fact, as argued in [4], in the face of an ever more complex reality, today’s nations, via their politicians, have retreated into simplified models of the world that contrast good and evil players and put absolute faith in the power of financial markets. The continuous distortion of reality necessary to make events conform to such models has created widespread wariness of information that comes from outside one’s own views of how the “machine” works.

The over-reliance on biased models trained on selected data for decision making in the political and personal spheres is an important threat to the survival of peaceful and democratic societies that no technological advance can counteract — not even the “supernova” of [5]. To put it as in [3]:

… to get our basic bearings we need, above all, greater precision in matters of the soul. The stunning events of our age of anger, and our perplexity before them, make it imperative that we anchor thought in the sphere of emotions; these upheavals demand nothing less than a radically enlarged understanding of what it means for human beings to pursue the contradictory ideals of freedom, equality and prosperity.

References

[1] S. Shalev-Shwartz and S. Ben-David, Understanding machine learning: From theory to algorithms, Cambridge University Press, 2014.

[2] C. O’Neil, Weapons of math destruction: How big data increases inequality and threatens democracy, Crown Publishing Group, 2016.

[3] P. Mishra, Welcome to the age of anger, The Guardian, Dec. 8, 2016.

[4] A. Curtis, HyperNormalisation, BBC, 2016.

[5] T. Friedman, Thank you for being late, Farrar, Straus and Giroux, 2016.