In mid-February of last year, my family took the momentous decision to adopt a shared calendar — an actual paper calendar to keep track of travels, classes, and various engagements. Everyone’s schedules were getting too complex, and we thought that a paper trail of evidence would limit opportunities for recrimination. Ten days later, the calendar was demoted to the role of scratch paper: trips no longer a possibility, classes cancelled, engagements reduced to Zoom calls.
The “AIs” on my smartphone did not register the change, sending daily reminders about the best way to get to work, going so far as to suggest the best route by car (I have not owned one since before the words “smart” and “phone” meant anything together). My two-year-old son, by contrast, took the change in stride, quickly learning to navigate upturned schedules and moods, masks nothing more than what (some) people wear to go out for a walk.
The AIs were stuck within the “machine”, crunching data in search of regularities that did not exist, while my son took reality in, connecting the dots between behavior and consequences, between words and objects, between symbols and the real world.
Which leads us to the use of the term “semantics”, which has gone through changes of its own in recent years. Following Brian Cantwell Smith’s phrasing, in computer science, semantics has come to refer to the behavioral consequences, within the computer system, of a program being executed, while its traditional use describes relationships and consequences of a given program, idea, or action in the real world. According to the new definition, the AIs in my smartphone are engaged in semantically meaningful tasks, while the old definition would assign the “semantic badge” only to my son’s efforts.
There are, of course, ways in which the AIs can receive feedback from the real world. Think of robots finding optimal trajectories in an environment via reinforcement learning, or of the (infamous) A/B testing. But these are all mediated by some algorithmic encoder converting real-world inputs, such as user experience, into numerical quantities, such as binary feedback. The uncertainty, imperfections, and complexities of the real world are simplified in ways that may distort their actual significance. Paraphrasing Smith again, in the words of Paul Taylor, “if we seem to inhabit a world that is constructed of well-defined objects exemplifying properties and standing in unambiguous relations, that is an achievement of our intelligence, not a truth that can be used when engineering an artificial intelligence”.
Following the lead of computer scientists, communication technology researchers (yes, like me) have also started using the term “semantics” to refer to goal-oriented communications — a notion that moves past the classical goal of transferring bits towards the aim of transferring actionable information. This approach inherits the philosophical limitations of the terminology used in computer science, and one may be justifiably suspicious about the need for introducing this new language. Time will tell whether it will go the way of my family’s first paper calendar or whether it is here to stay.