Social robotics is a rather new field of research (and development) that aims to design purely interactive robots or to provide service robots with social interaction capacities to facilitate their integration into society. Although the social trend of human-machine interaction (HMI) is relatively new, some “sociobots” are already available on the market or active in certain environments such as nursing homes, hospitals, hotels, companies and shops. Without falling into a deterministic vision of our technological future, we could soon find ourselves increasingly confronted by artificial social agents, so “living with AI” might also mean “living with robots”. But what should these robots be? Spoon answers that question with its “for, by and among humans” rule. Let’s examine this proposition.
Robotics for humans: natural interaction
To promote robotics for humans, it is essential to emphasise the fundamental characteristic of accessibility by focusing on natural HMI. We call these natural HMIs “animal interactions” and define them as the reproduction, in the machine, of interpersonal coordination capacities based on social cues such as eye contact, emotion detection or shared attention. Providing objects with these abilities is meant to create a social affordance and to reduce barriers to use as drastically as possible. Interacting with our creatures and “using” them is an intuitive process that demands no prerequisite other than basic sociality.
Promoting natural interaction with embodied artificial agents is also a way to create physical avatars which centralise the different AIs of a given place. If our environment is to become smarter and smarter, it is crucial to let us, on the one hand, be aware of the decisions these AIs can make, and to give us, on the other hand, the ability to understand and control them easily.
In this human-centric approach, the shape of robots is a determining factor. According to the “uncanny valley” theory proposed by the Japanese roboticist Masahiro Mori, the closer a robot comes to resembling a human being, the more monstrous its remaining defects appear. Moreover, the humanoid shape, even in its most abstract adaptations, tends to overpromise about the actual capacities of the artificial agent. The quality of natural interaction relies not only on its technical properties but also on what we project onto the machine. Expectations that are too high can therefore distort the interaction experience and lead to severe disappointment.
We have opted for hybrid shapes in which non-humanoid hardware structures, such as robotic arms and interactive screens, meet a digital animal face. The latter is a precious social landmark made to create the best possible human-machine understanding without falling into overpromise. The animal bias is also a way to reassure users by making the technology as pleasant as possible; it likewise aims to avoid the fears linked to the technical reproduction of humans, which are deeply rooted in European culture. However, the matter of shape strongly depends on the cultural context of the robot’s conception and integration. Japan, for instance, is much less reluctant than Western countries to adopt humanoid shapes. From a pragmatic point of view, the current golden rule of social robotics design consists of shaping the artefact in accordance with its actual capacities.
Robotics by and among humans: a collective conception of AI for collective social robots
Our second belief is that creating social robots’ intelligence requires a collective-by-design approach. This is a way to establish a constant reminder of the technologies’ human genesis, while openly favouring the collective determination of technologies. As accessibility and collectivity are cornerstones of our outlook, we focus specifically on public spaces: stations, public transport, malls, hotels, receptions, monuments, etc.
The technology industry as a whole, and consequently society itself, tends to forget the human origins of technologies. This is particularly true of the AI industry, whose very objective is the technical reproduction and automation of cognitive processes. One of the most significant examples of this forgetting is the Go match that pitted Lee Sedol against Google DeepMind’s algorithm AlphaGo in 2016. The Go player managed to win only the fourth game, by exploiting a flaw in his “adversary”. We stand with those who think that Lee Sedol did not really lose against an AI – or only apparently so – but against the multiple human intelligences, specialised in AI, who were committed to designing AlphaGo. Seen from this perspective, winning the fourth game can be considered an astonishing masterstroke.
One way to remember this human origin of AI, and of technology in general, is to let the design process reach out to a real human environment. To make this ambition concrete, our creatures have a learning scenario called Agora in which, much as on an open-source programming platform, everyone is free to teach them. As part of its programming is transferred to its human environment, the robot is thought to become a local social network fed by all the different users’ interactions. If you ask it a question whose answer is not part of its pre-programmed knowledge, its response will depend on what it has learnt from other users. Every response can be assessed positively or negatively by the users, ensuring collective downstream design.
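The teach-and-assess loop described above can be sketched in miniature. The following Python model is our own illustrative assumption, not Spoon’s implementation: users contribute candidate answers to a shared store, other users vote them up or down, and the robot replies with the best-rated answer so far.

```python
from collections import defaultdict


class AgoraStore:
    """Hypothetical sketch of a collectively taught Q&A store.

    Users teach candidate answers; other users assess them
    positively or negatively; asking returns the best-rated
    answer learnt so far, or None for untaught questions.
    """

    def __init__(self):
        # question -> {candidate answer: cumulative score}
        self._answers = defaultdict(dict)

    def teach(self, question, answer):
        """Record a user-contributed answer with an initial score of 0."""
        self._answers[question].setdefault(answer, 0)

    def assess(self, question, answer, delta):
        """Apply a positive (+1) or negative (-1) user assessment."""
        if answer in self._answers[question]:
            self._answers[question][answer] += delta

    def ask(self, question):
        """Return the highest-scored answer, or None if nothing was taught."""
        candidates = self._answers.get(question)
        if not candidates:
            return None
        return max(candidates, key=candidates.get)


# Example session in a station lobby (invented content):
robot = AgoraStore()
robot.teach("Where is platform 3?", "Past the ticket office, on the left.")
robot.teach("Where is platform 3?", "No idea.")
robot.assess("Where is platform 3?", "Past the ticket office, on the left.", +1)
robot.assess("Where is platform 3?", "No idea.", -1)
print(robot.ask("Where is platform 3?"))  # → Past the ticket office, on the left.
```

The essential design choice is that no answer is privileged at conception time: ranking emerges entirely downstream, from the accumulated assessments of the robot’s human environment.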
Agora is meant to empower users by allowing them to participate in the creation of the artificial creature and to steer the technical design in a “society-in-the-loop” direction. This bottom-up approach is intended as a first step towards integrating people into the design process and shaping social robots on a collective-intelligence basis.