Surveillance Capitalism

Definition: 1.  A new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales; 2. A parasitic economic logic in which the production of goods and services is subordinated to a new global architecture of behavioral modification; 3. A rogue mutation of capitalism marked by concentration of wealth, knowledge, and power unprecedented in human history; 4. The foundational framework of a surveillance economy; 5. As significant a threat to human nature in the twenty-first century as industrial capitalism was to the natural world in the nineteenth and twentieth; 6. The origin of a new instrumentarian power that asserts dominance over society and presents startling challenges to market democracy; 7. A movement that aims to impose a new collective order based on total certainty; 8. An expropriation of critical human rights that is best understood as a coup from above: an overthrow of people’s sovereignty.

(from “The Age of Surveillance Capitalism” by Shoshana Zuboff)


Start Making Sense: Semantic Filtering and Control for Post-5G Connectivity

How can we tame, and make efficient use of, the flood of information produced by humans and things connected to future wireless networks? Here is a brief position paper, co-authored with Petar Popovski (Aalborg University), that proposes the concept of semantic filtering and control for post-5G connectivity.

Future Politics

The currency and lifeblood of politics is information, and the way we collect, organize, and process information is undergoing profound changes — what does this mean for our political systems?

Many events of the last two years, including the recent protests in France and the current impasse over Brexit in the UK, point to a future politics of right- and left-wing populism, to social fragmentation, to an increasing power of the state and of tech companies in terms of force, scrutiny, and perception-control, and to a breakdown of the liberal democratic system that was once famously hailed as the end of history.

In Future Politics, Jamie Susskind offers some hope that “a new and more robust form of democracy” may instead slowly emerge, supercharged by digital technologies and guided by the “vigilance, prudence, curiosity, persistence, assertiveness, and public-spiritedness” of this and of future generations:

“The solution, I hope, will be […] one that combines the most promising elements of Deliberative Democracy, Direct Democracy, Wiki Democracy, Data Democracy, and AI Democracy.”

Whether or not one shares Susskind’s well-argued and informed outlook (the book is highly recommended), it is worth briefly reviewing these ideas.

  • Deliberative Democracy: Decisions should be taken in a way that grants everyone the same opportunity to participate in the discussion — deliberation, and not just voting, should be central to the decision process. Deliberation should in principle be facilitated by digital platforms, but so far human nature has gotten in the way: fake news, isolated political communities, bullying and trolling covered by anonymity, and racist chatbots have dominated online discussion. And yet. New promising platforms are emerging, and there are even calls for the use of deliberative democracy tools to solve deadlocks caused by today’s political systems.
  • Direct Democracy: Decisions on all issues should be taken through voting. Digital platforms, such as DemocracyOS, offer an ideal tool to elicit citizens’ preferences. Voting on every single issue may, however, be impractical or undesirable — what do I know about fiscal policy or waterway regulation? Complementing Direct Democracy, Liquid Democracy would allow me to delegate my vote on unfamiliar issues to trusted experts whose opinions I expect to share.
  • Wiki Democracy: Laws should be collaboratively written and edited, and perhaps encoded and recorded in a system of smart contracts. As pointed out by Jaron Lanier, under a Wiki Democracy, “superenergized people would be struggling to shift the wording of the tax code on a frantic, never-ending basis.” Not to mention if bots got into the game.
  • Data Democracy: Decisions should be taken on the basis of data continuously and uniformly collected from all citizens and agreed-upon machine learning algorithms. Data Democracy would obviate the need for constant voting, but how would we define a consensual and transparent moral framework to inform the operation of the algorithms? Where would Data Democracy leave human will and conscious participation in political life?
  • AI Democracy: Laws, and the corresponding code, should be written by an “AI” system. AI could implement policies by directly responding to the individual preferences of citizens or groups of citizens. As for Data Democracy, AI Democracy raises key issues of transparency and consensus. It also easily brings to mind Iain Banks’ AI-based Culture hyperpower and its clashes with civilizations that do not share its underlying moral framework.
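Of these proposals, Liquid Democracy is perhaps the easiest to make concrete: resolving delegations is a simple graph algorithm that follows each voter’s chain of trust until it reaches someone who voted directly, discarding any cycles along the way. Here is a minimal sketch (the data layout is purely illustrative, not taken from DemocracyOS or any other platform):

```python
def tally(direct_votes, delegations):
    """Resolve Liquid Democracy delegations and tally the votes.

    direct_votes: dict mapping direct voters to their choice
    delegations: dict mapping voters to the voter they delegate to
    """
    counts = {}
    for voter in set(direct_votes) | set(delegations):
        current, seen = voter, set()
        # Follow the delegation chain until reaching a direct voter.
        while current in delegations and current not in direct_votes:
            if current in seen:  # delegation cycle: the vote is lost
                current = None
                break
            seen.add(current)
            current = delegations[current]
        choice = direct_votes.get(current) if current is not None else None
        if choice is not None:
            counts[choice] = counts.get(choice, 0) + 1
    return counts

# "dan" trusts "carol", who in turn trusts "ann":
# tally({"ann": "yes", "bob": "no"}, {"carol": "ann", "dan": "carol"})
# yields {"yes": 3, "no": 1}
```

Even this toy version surfaces the design questions a real platform would face: whether delegations are transitive, what happens to votes caught in a cycle, and whether a direct vote should always override a delegation (here it does).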

As these descriptions make clear, digital technology would be a key enabler for all these new and old forms of democracy. Behind it all is the figure of the engineer working on the code, the algorithms, the databases, and the platforms. At the time of the first democracy, Plato wrote:

“There will be no end to the troubles of states, or of humanity itself, till philosophers become kings in this world, or till those we now call kings and rulers really and truly become philosophers, and political power and philosophy thus come into the same hands.”

Today, at this critical juncture in the history of democracy, it may be that the only way to avoid a dark future for states and humanity is for engineers to become “philosophers”, educated in the consequences and implications of their design choices.

Before Pressing “Submit”

“I do not care to show that I was right, but to determine if I was. […] Yes, we will restore everything, everything in doubt. And we will not proceed with seven league boots, but at a snail’s pace. And what we find today, tomorrow we will cancel from the blackboard and we will not rewrite it again, unless the day after tomorrow we find it again. If any discovery follows our predictions, we will consider it with special distrust. […] And only when we have failed, when, beaten without hope, we will be reduced to licking our wounds, then with death in the soul we will begin to ask ourselves if by chance we were not right […]” (Galileo Galilei as imagined by Bertolt Brecht)

Neuromorphic Computing: A Signal Processing Perspective

“OK Google, do not transfer any of my data to the network.”

In these days of GDPR and AI-enhanced toasters, can we keep our personal data private while still enjoying the convenience of mobile assistants, autonomous navigation systems, smart home appliances, and health-monitoring wearables?

The question is central to many discussions on the future of our societies. While for now “consumers on the whole seem content to bear a little totalitarianism for convenience”, as Tim Wu wrote, things may soon change as more and more scandals around the use of personal data undermine trust in the tech giants. This is both a cultural and a technological problem.

I have written before about neuromorphic computing as a promising technology for the implementation of low-power machine learning algorithms on mobile and embedded devices. Local processing on personal devices contrasts with the edge- or cloud-based learning of standard solutions, including within 5G, and could help develop privacy-minded AI applications.

Since my original post, a lot has happened in the field. Commercial products are now available thanks to the emergence of start-ups such as BrainChip; Intel has launched its own neuromorphic chip and supported a programming platform developed by Applied Brain Research, another start-up; and new applications in computer science and engineering are emerging that take advantage of the energy efficiency of the technology.

My fascination with the field has also not waned. But finding good references to recommend to engineering students and researchers interested in an introduction to the topic is not easy. Most previous work has been done in computational neuroscience, where conceptual aspects are often lost amid biological arguments and lengthy asides on relationships with existing hypotheses on the operation of the brain. In the hope of providing a friendlier entry point, I would like to share a tutorial prepared with Hyeryung Jang, Brian Gardner, and André Grüning:

Feedback and comments are as always very welcome.


Before and After Facebook

It’s 2002, Myspace has yet to be founded, the Facebook we know is still years away, and Google is a private company. With uncanny foresight, a writer in Iceland sees through the following years of social networks creeping into the lives of millions through computers and mobile devices to imagine the ultimate, pervasive and intangible, social web. Tapping into the communication system used by flocks of birds, the LoveStar corporation develops a technology that enables people to communicate directly with one another — and with LoveStar’s employees — via their thoughts, with no need for gadgets or physical proximity. Everyone is seamlessly connected in a gigantic virtual social network.

What is most striking about Andri Snær Magnason’s imagined future is that he also foresaw the main commercial use of a universal platform connecting users and corporations: advertising. Unlike the corporations ruling the web today, LoveStar could bypass the burdensome step of capturing one’s attention in order to sell it to a third party. Instead, people’s speech centers could be directly rented out and hijacked to automatically relay advertisements or post-purchase praise to targeted passersby, friends, or schoolmates. “A company could have its name substituted for hello, bye, really, yes, no, black, or white”, forcibly infiltrating any conversation.

Today this sounds more like a business plan than science-fiction. Or in the author’s words:

“when it came out in 2002 it was called a dystopian novel; now it’s being called a parody. We seem to have already reached that dystopia.”

The novel ends on a somewhat positive note, but not before untold suffering is unleashed on the connected humanity. It may be time to start thinking about how to change course in our own timeline.

The New Futurological Congress

In 1971 — writing in what was then the Soviet Union, had previously been Poland, and would later become Ukraine — Stanisław Lem imagined “The Futurological Congress”. Held in Costa Rica, planet Earth, the “first item of business [of the congress] was to address first the world urban crisis, the second — the ecology crisis, the third — the air pollution crisis, the fourth — the energy crisis, the fifth — the food crisis. Then, adjournment. The technological, military and political crises were to be dealt with on the following day”. The conference room was overcrowded with representatives from many countries, so, to “help expedite the proceedings, […] the lecturer would speak only in numerals, calling attention in this fashion to the salient paragraphs of his work. […] Stan Hazelton of the U.S. delegation immediately threw the hall into a flurry by emphatically repeating: 4, 6, 11, and therefore 22; 5, 9, hence 22; 3, 7, 2, 11, from which it followed that 22 and only 22!! Someone jumped up, saying yes but 5, and what about 6, 18, or 4 for that matter; Hazelton countered this objection with the crushing retort that, either way, 22.”*

What type of futurological congress can we imagine today for the next century? The agenda would conceivably be a carbon copy of that from Lem’s assembly — today’s ecological, energy, political, and military crises differ in details but are no less daunting than in the 1970s — with some added sessions on cyber-security, bio-technology, and education. The scene could, however, look rather different. As the few delegates walk into the room — some virtually, some physically — their local AI platforms would interface with those of their colleagues from other countries. The low-level discussions would then take place at the speed of the AI algorithms ping-ponging numbers and only occasionally requesting some high-level directives from the human delegates. (Doing A may cause war with a 10% probability but would otherwise increase our GDP by 5%; choose A?) Countries without AI capabilities would be excluded from the proceedings — after all, those countries would be mostly depopulated and mined for energy resources to run the personal AI assistants of first-world citizens. The representatives would be creative types with non-technical backgrounds, trained to make decisions quickly based on instinct, while the AI algorithms would take care of all practical aspects.

Or so we could be led to imagine based on a number of popular books and editorials by today’s experts and gurus. The last 30 years of software development may instead conjure up scenes of representatives scrambling to get their software to boot and algorithms to connect, of sessions lost to updates and bugs while trying to contact the lonely person who still remembers what the algorithms do and how they were programmed.

More seriously, at issue here is the understanding of what an algorithm is. As Steven Poole writes in The Guardian, the “word sounds hi-tech, but in fact it’s very old: imported into English, via French and Latin, from the name of the ninth-century Arab mathematician al-Khwarizmi. Originally algorithm simply meant what is now called the “Arabic” system of numbers (including zero). […] To this day, algorithm is still just a fancy name for a set of rules. If this, then that; if that is true, then do this.” In other words, an algorithm does what it was programmed to do under the circumstances contemplated by the programmer. “If we thought of algorithms as mere humdrum flowcharts, drawn up by humans, we’d be better able to see where responsibility really lies if, or when, they go wrong.”
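Poole’s point is easy to make concrete. Stripped of the mystique, a classical algorithm is just a flowchart a person wrote down, with the programmer’s assumptions — and responsibility — in plain sight. A deliberately toy example (the rules are entirely hypothetical, for illustration only):

```python
def route_message(message):
    # A "humdrum flowchart": if this, then that.
    # Every branch is a rule a human chose and can be held to account for.
    text = message.lower()
    if "free money" in text:
        return "spam"
    elif text.count("!") > 3:
        return "suspicious"
    else:
        return "inbox"

# route_message("Claim your FREE MONEY now")  ->  "spam"
```

The blind spots are as visible as the rules: any message the programmer did not contemplate falls through to the last branch, whether it deserves to or not.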

Using algorithms without understanding what they do and when is a form of proceduralism, “the rote application of sophisticated techniques, at the expense of qualitative reasoning and subjective judgment,” which may well lead to illogical, unethical, or even deadly outcomes. The problem is compounded by the fact that the “algorithms” that are considered today as AI are not made of logical sequences of if-then-else statements, but are rather sophisticated pattern recognition mechanisms that operate on large troves of data.

Here’s hoping that, before plugging an AI into the next futurological congress network, a representative would have to follow Hannah Fry’s suggestion and address Tony Benn’s five simple questions:

“What power have you got?
Where did you get it from?
In whose interests do you use it?
To whom are you accountable?
How do we get rid of you?”

* 22 meant “the end of the world”.

Make AI Do it

What shall we do tonight? you ask. Friend A is flexible (“You choose!”), while Friend B has a strong opinion and lets you know it. Which friend is being kinder to you?

In “Algorithms to Live By”, the authors argue that Friend B is the more generous of the two:

“Seemingly innocuous language like ‘Oh, I’m flexible’ […] has a dark computational underbelly that should make you think twice. It has the veneer of kindness about it, but it does two deeply alarming things. First, it passes the cognitive buck: ‘Here’s a problem, you handle it.’ Second, by not stating your preferences, it invites the others to simulate or imagine them.”

If we allow that deciding on a plan for the evening is mostly a matter of computation, this computational kindness principle has evident merits and I, for one, am nodding in agreement. And so are the big tech companies, all furiously competing to be your best Friend B. The latest campaign by Google, “Make Google do it”, makes this plain: the ambition is to think for us, giving us directions, telling us what to buy, what to watch, where to go, whom to date, and so on.

The amount of cognitive offload from humans to AI-powered apps is an evident, and possibly defining, trend of our times. As pointed out by James Bridle in “New Dark Age: Technology and the End of the Future”, this process is accompanied by the spreading “belief that any given problem can be solved by the application of computation”, so that

“Computation replaces conscious thought. We think more and more like machines, or we do not think at all.”

A typical way to justify our over-reliance on machines as surrogates for our own cognitive capacities is to point to the complexity of the modern world, which has been compounded by the inter-connectivity brought about by the Internet. This view echoes a prescient 1926 passage by H. P. Lovecraft, cited by Bridle:

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the light into the peace and safety of a new dark age.

But the effects of this unprecedented cognitive offload, even on our health, are at best unclear. Ominously, Vivienne Ming warns that, as the use of apps deprives our brains of the exercise they have become used to over millions of years, we might see widespread early-onset dementia within a single generation.