Turing, London, and Information Theory

In this one-page contribution to the first London Symposium on Information Theory, Alan Turing discusses learning as opposed to programming, the role of computational complexity in information theory, and genetic algorithms — in 1950.

(with thanks to Deniz Gündüz)


The Rise of Hybrid Digital-Analog

As a keen observer of nature, Leonardo da Vinci was more comfortable with geometry than with arithmetic. Shapes, being continuous quantities, were easier to fit into, and disappear into, the observable world than discrete, discontinuous numbers. In the centuries since Leonardo, physics has shared his preference for analog thinking, building on calculus to describe macroscopic phenomena. The analog paradigm was upended at the beginning of the last century, when the quantum revolution revealed that the microscopic world behaves digitally, with observable quantities taking only discrete values. Quantum physics is, however, at heart a hybrid analog-digital theory, as it requires the presence of analog hidden variables to model the digital observations.

Computing technology appears to be following a similar path. The state-of-the-art computer that Claude Shannon found in Vannevar Bush’s lab at MIT in the thirties was analog: turning its wheels would set the parameters of a differential equation, which the machine then solved via integration. Shannon’s thesis and the invention of the transistor ushered in the era of digital computing and the information age, relegating analog computing to little more than a historical curiosity.
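
For the curious, here is a minimal sketch, in Python, of the kind of computation the differential analyzer performed mechanically (the equation and constants are illustrative choices of mine): dial in the parameters, then let integration produce the solution.

```python
# Toy emulation of an analog differential analyzer: "turning the wheels"
# sets the parameters (a, b) of a differential equation, and the machine
# produces the solution by integration. Here, Euler integration of a
# damped oscillator x'' + a*x' + b*x = 0.

def integrate(a, b, x0=1.0, v0=0.0, dt=1e-3, t_end=10.0):
    x, v, t = x0, v0, 0.0
    while t < t_end:
        x, v = x + dt * v, v + dt * (-a * v - b * x)  # one integration step
        t += dt
    return x, v  # state of the "integrator" at time t_end

# Dial in the damping a and stiffness b, then read off the solution.
print(integrate(a=0.5, b=4.0))
```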

But analog computing retains important advantages over digital machines. Analog computers can be faster at carrying out specialized tasks. As an example, deep neural networks, which have led to the well-publicized breakthroughs in pattern recognition, reinforcement learning, and data generation tasks, are inherently analog (although they are currently mostly implemented on digital platforms). Furthermore, while the reliance of digital computing on either-or choices can provide a higher accuracy, it can also yield catastrophic failures. In contrast, the lower accuracy of analog systems is accompanied by a gradual performance loss in case of errors. Finally, analog computers can leverage time, not just as a neutral substrate for computation as in digital machines, but as an additional information-carrying dimension. The resulting space-time computing has the potential to reduce the energetic and spatial footprint of information processing.
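
To make the either-or point concrete, here is a toy numerical comparison; the representations and the noise level are illustrative choices of mine, not a model of any specific hardware. A single flipped bit in a digital word can corrupt a value catastrophically, while analog noise degrades it only gradually.

```python
import random

value = 0.7243  # the quantity to be represented

# Digital: 16-bit fixed-point word; flipping the most significant bit
# (a single either-or error) yields a catastrophic deviation.
word = int(value * (1 << 16))
corrupted = word ^ (1 << 15)
digital_error = abs(corrupted / (1 << 16) - value)

# Analog: the same value stored as a continuous quantity suffers only a
# small additive perturbation, so performance degrades gracefully.
analog_error = abs(random.gauss(0.0, 0.01))

print(f"digital error after one bit flip: {digital_error:.4f}")  # about 0.5
print(f"typical analog error:             {analog_error:.4f}")  # about 0.01
```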

The outlined complementarity of analog and digital computing has led experts to predict that hybrid digital-analog computers will be the way of the future. As early as the eighties, Terrence J. Sejnowski is reported to have said: “I suspect the computers used in the future will be hybrid designs, incorporating analog and digital.” This conjecture is supported by our current understanding of the operation of biological neurons, which communicate using the digital language of spikes, but maintain internal analog states in the form of membrane potentials.
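
A schematic leaky integrate-and-fire model, the textbook abstraction of this hybrid behavior, captures the idea in a few lines of Python; the constants below are illustrative.

```python
# Leaky integrate-and-fire neuron: the membrane potential is an analog
# internal state, while the output is a digital train of spikes.

def lif_neuron(input_currents, leak=0.9, threshold=1.0):
    potential = 0.0   # analog state: membrane potential
    spikes = []       # digital output: 0/1 spike train
    for current in input_currents:
        potential = leak * potential + current  # leaky analog integration
        if potential >= threshold:              # digital either-or decision
            spikes.append(1)
            potential = 0.0                     # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))  # [0, 0, 1, 0, 0, 1]
```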

With the emergence of academic and commercial neuromorphic processors, the rise of hybrid digital-analog computing may be just around the corner. As is often the case, the trend has been anticipated by fiction. In Autonomous, robots have a digital main logic unit with a human brain as a coprocessor to interpret people’s reactions and emotions. Analog elements can supply common sense and humanity, in contrast to digital AI that “can make a perfect chess move while the room is on fire.” Similarly, in H(a)ppy and Gnomon, analog is an element of disruption and reason in an ideally ordered and purified world under constant digital surveillance.

When Message and Meaning are One and the Same

The indigenous creatures of Embassytown — an outpost of the human diaspora somewhere/somewhen in the space-time imagined by China Miéville — communicate via the Language. Despite requiring two coordinated sound sources to be spoken, the Language does not have the capacity to express any duplicitous thought: every message, in order to be perceived as part of the Language, must correspond to a physical reality. A spoken message is hence merely a link to an existing object, and it ceases to be recognized as a message when the linked object is no longer in existence.

As Miéville describes it: “… each word of Language, sound isomorphic with some Real: not a thought, not really, only self-expressed worldness […] Language had always been redundant: it had only ever been the world.”

The Language upends Shannon’s premise that the semantic aspects of communication are irrelevant to the problem of transferring and storing information. In the Language, recorded sounds, untied to the state of the mind that produced them, do not carry any information. In a reversal of Shannon’s framework, information is thus inextricably linked to its meaning, and preserving information requires the maintenance of the physical object that embodies its semantics.

When message and meaning are one and the same as in the Language, information cannot be represented in any format other than in its original expression; Shannon’s information theory ceases to be applicable; and information becomes analog, irreproducible, and intrinsically physical. (And, as the events in the novel show, interactions with the human language may lead to some dramatic unforeseen consequences.)

A Few Things I Didn’t Know About Claude Shannon

(Photo: Claude Shannon, US mathematician, 1962.)

  • While he was a student at MIT, Claude Shannon, the future father of Information Theory, trained as an aircraft pilot in his spare time (to the protestations of the instructor, who was worried about damaging such a promising brain).
  • What do Coco Chanel, Truman Capote, Albert Camus, Gandhi, Malcolm X and Claude Shannon have in common? They were all photographed by Henri Cartier-Bresson (see photo).
  • Having pioneered artificial intelligence research with his maze-solving mouse and his chess-playing machine, in 1984 Shannon proposed the following targets for 2001: 1) Beat the chess world champion (check); 2) Generate a poem accepted for publication by the New Yorker (work in progress); 3) Prove the Riemann hypothesis (work in progress); 4) Pick stocks outperforming the prime rate by 50% (check, although perhaps with some delay).
  • Shannon corresponded with L. Ron Hubbard of Scientology fame, writing about him that he “has been doing very interesting work lately in using a modified hypnotic technique for therapeutic purposes”, although he later conceded that he did not know “whether or not his treatment contains anything of value”.
  • He is quoted as saying that great insights spring from a “constructive dissatisfaction”, that is, “a slight irritation when things don’t quite look right”.

(From “A Mind at Play”, an excellent book about Claude Shannon by Jimmy Soni and Rob Goodman.)

The Network & the Network


In “The City & the City”, China Miéville imagines an unusual coexistence arrangement between two cities located in the same geographical area that provides a surprisingly apt metaphor for the concept of network slicing in 5G networks — from the city & the city to the network & the network.

The two cities, Besźel and Ul Qoma, occupy the same physical location, with buildings, squares, streets and parks either allocated completely to one city or “crosshatched”, that is, shared. The separation and isolation between the two cities are not ensured by physical borders, but are rather enforced by cultural customs and legal norms. The inhabitants of each city are taught from childhood to “unsee” anything that lies in the other city, consciously ignoring people, cars and buildings, even though they share the same sidewalks, roads and city blocks. Recognition of “alter” areas and citizens is made possible by the different architectures, languages and clothing styles adopted in the two cities. Breaching the logical divide between Besźel and Ul Qoma by entering areas or interacting with denizens of the other city is a serious crime dealt with by a special police force. (Prospective tourists in Besźel or Ul Qoma are required to attend a long preliminary course to learn how to “unsee”.)

And now for the two networks: Experts predict an upcoming upheaval in telecommunication networks to parallel the recent revolution in computing brought on by cloudification. Just as computing and storage have become readily available on demand to individuals, companies and governments on shared cloud platforms, network slicing technologies are expected to enable the on-demand instantiation of wireless services on a common network substrate. Networking and wireless access for, say, a start-up offering IoT or vehicular communication applications, could be quickly set up on the hardware and spectrum managed by an infrastructure provider. Each service would run its own network on the same physical infrastructure but on logically separated slices — the packets and signals of one slice “unseeing” those of the other. In keeping with the metaphor, ensuring the isolation and security of the coexisting slices is among the key challenges facing this potentially revolutionary technology.
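
For the technically minded, here is a toy sketch of the slicing idea; the class, slice names and resource numbers are my own illustrative choices, not any standardized API. Each service is granted a logically isolated share of a common physical substrate and only ever “sees” its own traffic.

```python
# Toy model of network slicing: one physical pool of resource blocks is
# logically partitioned into isolated slices, each running its own service.

class PhysicalNetwork:
    def __init__(self, total_blocks):
        self.free_blocks = total_blocks
        self.slices = {}

    def create_slice(self, name, blocks):
        if blocks > self.free_blocks:
            raise RuntimeError("insufficient physical resources")
        self.free_blocks -= blocks
        self.slices[name] = {"capacity": blocks, "packets": []}

    def send(self, slice_name, packet):
        # Each slice holds only its own traffic: the packets of one slice
        # "unsee" those of the others, enforcing logical isolation.
        self.slices[slice_name]["packets"].append(packet)

infra = PhysicalNetwork(total_blocks=100)
infra.create_slice("iot_startup", blocks=10)
infra.create_slice("vehicular", blocks=40)
infra.send("iot_startup", "sensor reading")
print(infra.slices["iot_startup"]["packets"])  # only this slice's traffic
```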


The Rebirth of Expertise?

These days, conversations on almost any topic — be it finance, health care, art, the economy, music, or even religion — do not seem complete without a lively, and more or less informed, exchange on AI and machine learning. The crux of the discussion typically rests on the role of humans in the increasingly large number of enterprises that depend on machines for decision making and manufacturing. In this context, a distinction that may prove useful in thinking about a future society of humans and “intelligent” machines is the one proposed back in the 60s in the field of psychology between fluid and crystallized intelligence. As recently pointed out by Sarah Harper, taken to its logical end point, this idea may yield some possibly counter-intuitive conclusions regarding the parts to be played by AI and by different generations in the workplace.

Fluid intelligence relates to the ability to solve new problems by applying well-defined logical rules, such as by means of inductive or deductive reasoning. Fluid intelligence does not depend on any external prior knowledge about the world and the problem domain. In contrast, crystallized intelligence is the capacity to build on one’s experience and knowledge to acquire new skills and to solve problems.

In humans, fluid intelligence tends to decrease with age, while crystallized intelligence follows an inverse trend, peaking much later in life. Machines appear to have surpassed humans in terms of fluid intelligence, given their unprecedented capability to recognize patterns in large volumes of data and to optimize actions over long time horizons. But building general-purpose skills based on expertise in a computer, that is, generating artificial crystallized intelligence, is broadly considered to be unattainable with current AI techniques (listen to Obama’s eloquent explanation of this point!). Current state-of-the-art machine learning methods in fact cannot even explain why they output given decisions.

So there you have it — in a system that can leverage the fluid intelligence of sophisticated AI tools, the crystallized intelligence borne out of the experience of older women and men may become more valuable than the speed and flexibility of fresh graduates. Considering the predictions of an increased lifespan, this sounds like good news — can it be that expertise is not dead after all?

Net Neutrality vs Net Vitality (and 5G)

A prime example of the complex relationship between digital technologies and the legal system is the fluidity and geographical variance of the laws that regulate broadband access. The discussion is typically framed — as far as I can tell from my outsider’s perspective — around two absolute principles, namely net neutrality and net vitality. The net neutrality and net vitality camps, at least in their purest expressions, often seem uninterested in hearing each other’s arguments. This tends to hide from public discussion the layered technological, economic, moral and legal aspects that underlie the delicate balance between access and economic incentives that is at the core of the issue. And things appear to be getting even more involved with the advent of 5G.

Net neutrality is — for purists — the principle that all bits are created equal. Accordingly, broadband access providers should not be allowed to “throttle” packets on the basis, for instance, of their application (e.g., BitTorrent) or their origin (as determined by the IP address). The network should be “dumb” and only convey bits between the two ends of a communication session. Regulation that upholds net neutrality rules is in place in many countries around the world, including in the EU and the US. Under the previous US administration, the FCC reclassified broadband Internet access as a “common carrier”, that is, as a public utility, in 2015, allowing the enforcement of net neutrality rules. Under the new administration, this decision now appears likely to be reversed.

The counterarguments to net neutrality typically center around some notion of net vitality, which refers broadly to the dynamism of the broadband Internet ecosystem, particularly as it pertains to investment and growth. The term was coined in a report by the Media Institute, where a quantitative index was proposed as a compound measure of the net vitality of a country in terms of applications and content (e.g., access, e-government, social network penetration, app development), devices (e.g., smartphone penetration and sales), networks (e.g., cybersecurity, investment, broadband prices), and macroeconomic factors (e.g., number and valuation of start-ups).

Net neutrality purists — not all advocates fall into this category — believe that allowing broadband access providers to discriminate on the basis of a packet’s identity would pose a threat to freedom of expression and competition. Without net neutrality rules, telecom operators could in fact block competitors’ services, and also favor deep-pocketed internet companies, such as the Frightful Five (Alphabet, Amazon, Apple, Facebook and Microsoft), that can outspend start-ups for faster access. A case in point is the blocking of Google Wallet by Verizon Wireless, AT&T, and T-Mobile to promote their competing Isis (!) mobile payment system.

The net vitality camp, headed by broadband access providers and economists, deems net neutrality rules to be an impediment to investment and growth. As claimed in a 2016 manifesto by European telecom operators, only by charging more for better service can sufficient revenue be raised by broadband access providers to fund new infrastructure and services.

Digging a little deeper, one finds that the issue is more complex than the arguments of the two camps imply. To start, some discrimination among the bits carried by the network may in fact serve a useful purpose. For instance, by letting some packets be transported for free, telecom operators can offer zero-cost Internet access to the poorest communities in the developing world, as in the Facebook Zero and Google Free Zone projects. And packet prioritization is in fact already implemented in LTE networks as a necessary means to ensure call quality for Voice over LTE (VoLTE is not considered a broadband Internet service and is hence not subject to net neutrality regulations).
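
A minimal sketch of the kind of packet prioritization involved follows; the class names and priority values are illustrative, not actual 3GPP parameters. Packets of a protected class, such as voice, are always served before best-effort traffic.

```python
import heapq

# Toy priority scheduler: voice frames are served before best-effort
# packets, in the spirit of VoLTE bearer prioritization in LTE.
# Lower priority value means served first; values are illustrative.
PRIORITY = {"voice": 0, "video": 1, "best_effort": 2}

queue, arrival = [], 0

def enqueue(traffic_class, payload):
    global arrival
    heapq.heappush(queue, (PRIORITY[traffic_class], arrival, payload))
    arrival += 1  # tie-breaker: preserve arrival order within a class

enqueue("best_effort", "email segment")
enqueue("voice", "voice frame")
enqueue("best_effort", "web page chunk")

while queue:
    _, _, payload = heapq.heappop(queue)
    print(payload)  # voice frame, email segment, web page chunk
```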

That net neutrality is a more subtle requirement than the “every bit is created equal” mantra is in fact well recognized by many net neutrality advocates. When making the case for net neutrality rules, then-President Obama called for “no blocking, no throttling, no special treatment at interconnections, and no paid prioritization to speed content transmission”, hence stopping short of prescribing full bit equivalence. Tim Berners-Lee and Eric Schmidt have also voiced similar opinions.

The planned transition to 5G systems is bound to add a further layer of complexity to the relationship between net neutrality and net vitality. 5G networks are indeed expected not only to provide broadband access, but also to serve vertical industries through the deployment of ultra-reliable and low-latency communication services. In this context, it seems apparent that bits carrying information about, say, a remote surgery or the control of a vehicle, should not be treated in the same way as bits encoding an email.

As the example of VoLTE shows, a general solution may lie in isolating mobile broadband services, on which strong net neutrality guarantees can be enforced, from other types of traffic, such as ultra-reliable and machine-type communications, on which traffic differentiation may be allowed. The feasibility of this approach is reinforced by the fact that isolation is a central feature of network slicing, a technology that will allow 5G operators to create virtual networks that are fine-tuned for specific applications.


In the Loop (On Technology and Politics)


The feedback loop between politics and technology appears to be one of the most potent motors of human history, its pace quickening with each innovation cycle.

In antiquity, the Greek city states and the Persian and Roman Empires created the necessary conditions and incentives for mathematicians and engineers to pursue specific technologies, such as aqueducts and catapults. Later, the feedback loop began to close, with technological advances directly informing the political system — in Marx’s words:

“The hand-mill gives you society with the feudal lord; the steam-mill, society with the industrial capitalist.”

Fast-forwarding to today, much has been written about the way in which new communication tools have enabled the resurgence of popular and populist movements in the Middle East, Europe and the USA. It is also often argued that it is politics that is bound to play catch-up in this process, with technological firms leading the way towards decentralization and acceleration.

But, in light of recent political developments around immigration, the feedback loop appears to be promptly working its way in the reverse direction. Politics, in fact, seems bound to dislodge the barycenter of technological innovation from its current established poles. Technology has thrived in environments characterized by diversity and mobility: financial tech firms have found fertile ground in London, and Internet companies have taken over Silicon Valley. But the threat, even if not yet directly enforced, of new walls and restrictions in the UK and USA is pushing gifted PhD students and post-docs to choose alternative destinations, such as Canada, New Zealand, Mexico and Singapore.

Looking further into the future, and updating Marx’s quote, 3D printing may give us a post-scarcity and post-work society. As imagined by Cory Doctorow in “Walkaway”, if tools such as 3D printers fulfill their promise, objects and foodstuffs will be just a few clicks away, produced at close to zero cost by repurposing unused materials and by transforming bits into things. In an economy of abundance, there would be no need to keep jobs, maintain a currency, protect private property or support the current education system. This would strip the nation state of its main functions and usher in new ways of organizing societies — and, inevitably, new technologies.

Demain


Having read yet another discouraging article on the state of our planet, a group of French filmmakers embarked on an optimistic globe-trotting quest for climate change solutions that took them to the UK, USA, Denmark, France, Switzerland, Finland and India. The result is a 2015 documentary entitled “Tomorrow” that is now playing in US theaters. The movie is meant as an antidote to the fatalism that can stem from familiarity with the scientific consensus on global warming.

Shot at a time in which prospects were not as dire as they appear today after the latest policy shifts in the United States, the documentary finds hope in innovative sustainable approaches to agriculture, energy production, finance, democracy and education. A common underlying element of all the surveyed solutions is their reliance on social, local and decentralized mechanisms. Including the inevitable interview with Vandana Shiva and an expected visit to a Finnish elementary school, the film uncovers a heartening set of initiatives, such as permaculture and the adoption of local currencies alongside conventional government-backed money.

Conspicuously missing from the solutions discussed in the movie are information and communication technologies (excluding the fleeting appearance of a smartphone used to pay in a local currency). This may not come as a surprise, given the measurable decrease in “closeness, connection, and conversation quality” among people in local communities that has resulted from the widespread adoption of mobile phones. A city of the future ruled by smart devices seems indeed destined to be a lonely place, incompatible with the development of meaningful social programs. If we also factor in the energy footprint of producing digital devices and running telecom networks, the case for considering communication technology a contributor to climate change appears to be well motivated.

Nonetheless, communication technology has a potentially important role to play in combating global warming. Smart phones are already used for emergency preparedness and coordination to respond to the effects of a changing climate. And, as discussed in a recent report of the Brookings Institution, the upcoming fifth generation of wireless networks may prove to be a significant asset in key areas such as water management, air quality control, energy production, transportation, and building design.

Take water management. It has been reported that two thirds of the world’s population may face water shortages by 2025. Thanks to Internet-of-Things (IoT)-enabled sensor and actuator networks, the efficiency in the use of this scarce natural resource may be improved via monitoring (e.g., of the concentration of dangerous chemicals), leakage detection, measuring home usage, adaptive irrigation tailored to measured moisture levels, and smart chillers for industries.
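
To give a flavor of the logic involved, here is a minimal sketch of an adaptive irrigation controller; the thresholds and sensor readings are hypothetical placeholders for an actual IoT deployment. Water is released only when the measured soil moisture drops below a target band.

```python
# Toy adaptive irrigation controller driven by IoT moisture sensors.
TARGET_MOISTURE = 0.35  # desired volumetric water content (fraction)
HYSTERESIS = 0.05       # dead band to avoid rapid valve switching

def control_step(moisture, valve_open):
    if moisture < TARGET_MOISTURE - HYSTERESIS:
        return True       # too dry: open the valve
    if moisture > TARGET_MOISTURE + HYSTERESIS:
        return False      # wet enough: close the valve
    return valve_open     # inside the band: keep the current state

valve = False
for reading in [0.40, 0.33, 0.28, 0.31, 0.42]:  # simulated sensor samples
    valve = control_step(reading, valve)
    print(f"moisture={reading:.2f} -> valve {'open' if valve else 'closed'}")
```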

Leveraging the benefits of connectedness for a more efficient use of natural resources while at the same time building more resilient and sustainable communities may prove a delicate balancing act, but one that could prove critical for the future of our planet.

Internet of Thoughts

Brain-computer interfaces have been a staple of cyberpunk plots for decades. They have also been the subject of serious scientific research since the 70s, leading to impressive recent prototypes that allow humans to remotely control artificial limbs through their thoughts.

In the last few days, two headline-grabbing announcements appear to presage an era in which brain-computer interfaces will be standard components of commercial communication devices. First, it was reported that Elon Musk launched a company that will invest in research to make tools “that may one day upload and download thoughts”. Other companies with similar goals are also in the works. And then Facebook revealed its plans to develop technology that would make it possible for users to compose outgoing messages via their thoughts and to “feel” incoming messages without reading them.

In this plausible scenario, in which brain-computer interfaces are integrated into communication devices, humans will be able to literally communicate through their minds. We would then experience a transition from the upcoming Internet of Things to a next generation of communication networks supporting an Internet of Thoughts.

This idea was fictionalized in “Lock In”, a 2014 novel by John Scalzi. In Scalzi’s world, massive funds are allocated by the government to brain-computer interface research in the aftermath of an epidemic that left millions of Americans locked in, that is, unable to move and communicate. The plot revolves around the fact that networked elements are prone to hacking, given their reliance on software (not unlike “Ghost in the Shell”). Hacking a brain, it turns out, may have quite unpleasant consequences — and not only for the hacked.

It has been reported that even the most experienced software programmers have an error rate of 0.05%, that is, on average, one error every 2,000 lines of code. This implies that there are thousands of bugs in a typical modern application: Android, for instance, has about 12 million lines of code. The fact that a single error may be enough to compromise the security of a system highlights the significant challenge of securing software-based networks from hacking.
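
The back-of-the-envelope arithmetic, made explicit (the figures are the reported ones quoted above):

```python
# Expected number of errors from the reported figures.
error_rate = 0.0005          # 0.05% of lines, i.e., one error per 2,000 lines
android_lines = 12_000_000   # reported size of the Android code base

print(error_rate * android_lines)  # 6000.0: thousands of bugs in one code base
```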

The ongoing softwarization of everything from computing to telecommunication networks may well one day (perhaps not too soon) extend to our minds. One can only hope that breakthroughs in software security will outpace threats from hackers as this process unfolds.