In the past, people have over-predicted artificial intelligence, but this does not mean that building intelligence into machines has proved less remarkable than the pioneers foresaw. Even on the theme of humanoid robots' usefulness, at least in the short term, it is not always clear which problems they fit best. Nevertheless, to most of us (unsurprisingly) one thing is crystal clear: progress in artificial intelligence appears to be an ineluctable movement that will, in the long term, manifest autonomous behaviours and greater embodied adaptivity.
The Shakey robot (perhaps you remember it, named for its jerky, trembling movements during task execution) is well known for having integrated perception with logical reasoning, using it to plan and control physical activity. The ELIZA program demonstrated how a computer could act like a Rogerian psychotherapist. SHRDLU showed in the early 1970s how a manipulator could follow instructions typed by a user, answering in English with reference to a simulated world of geometric blocks. Over the last decades we have learned to modulate the sense of wonder, the Eureka moment, that such events provoke. Think of systems showing that machines could compose music in the style of famous composers, outperform junior doctors in particular diagnostic exams and microsurgery, or serve as autopilots in cars, aircraft, drones, military machines (submarines, for instance) and robots.
When we shift our focus to robots in contact with humans, different paths of analysis, and different ethical questions, gain legitimacy.
Yes, it is true: social robotics is still a new technology, introduced mainly in elderly care and hospital settings, and it raises both pessimistic and over-optimistic reactions about robots' concrete capabilities, reactions that amplify the perceived risks and benefits of this (still) new technology.
When we frame our analysis by acknowledging that humans sometimes over-predict artificial intelligence, what we are really doing is admitting, sincerely and first of all to ourselves, that progress is slower than predicted and expected. Every one of us, now or in the near future, goes through these feelings.
But this leaves open which great difficulties we declare to be surmountable challenges, and how far we are from overcoming them. I also agree that sometimes a problem that looks terribly over-complicated turns out to have an astonishingly simple solution, even if the reverse still defines our daily condition…
As has already happened with other technologies, we can expect a sort of "social regulation" to emerge for robots in our private sphere: our homes and personal spaces. When robots become an essential part of the domestic environment, they have everything they need to collect information about who we are: our needs, demands, expectations, our abilities and our behaviour towards them, whether we want to keep them under control or adopt a more trustful, experimental attitude when they (re)act almost autonomously in the physical environment.
Nowadays we have several examples of functional social robots displaying the ability to replicate actions that appear intentional, in the way we expect them to be. This is something amazing in itself!
People should keep in mind that this form of projection is a typical human trait: think of when we are tempted to project onto others our personal emotions, desires and intentions. It works almost like the mimetic routine that the French philosopher of social science René Girard formulated in the 1970s. Following his argument, our autonomy appears elusive, since we project onto others the same (or very similar) desires, ideas and expectations that we ourselves borrowed from others. In this way, personal projection assumes the contours of mere imitation. If this is true, humans become free yet imaginative prisoners of such relational mimicry, in which imitators and initiators jointly reduce their differences and distance, and humanoid robots and AI algorithms come to resemble mirrors capable of capturing and reflecting back our inner phantoms, cognitive capabilities and fantasies.
Another scenario is to see robots through more open and analytical lenses: it is through the nature of their functional algorithms that different kinds of professionals mirror their own abilities, gauge their personal limits, and embrace the challenge of elevating their cognitive and physical potential, aligning it with the actions required of robots as well as with their complex structural architecture.
They are the ones who can calibrate the opportunity we seek to improve and refine our learning and knowledge, since robots behave as reflective machines that question us directly about how we want to define intelligence and how we would like others to interact with us.
This is a stimulus to question our own ethics.
Robots are already far more than just machines, and we are responsible for conceiving how their intelligence should be shaped, coding how they will interact with us. Especially if we assume that intelligence is not just data storage, but demands perception and meaningful decision-making skills. We already expect that they will improve their feedback through progressive interaction, learning to categorize our general habits and behaviours, as well as our social biases, rules and taboos, as a concrete possibility.
As we saw some weeks ago in the post on Big Data, Shannon clarified in his theory of information the primary role of physical reality in retrieving knowledge; here that role is played by perceptual experience: our body is what opens our intimate self to extraordinary subjective possibilities. What you may have in mind is also true: each of us lives them in a slightly subjective way, and none of us is immune to missing many other important kinds of information. I agree on that too. One thing is clear to both of us: it is through the body, which retains its excellence as a medium, that an agent experiences others, having first understood itself.
The topic of embodiment has been explored by neuroscience, philosophy of science and psychology, while physiology holds the last word. We perceive and engage thanks to body-environment interactions, encoded as motor and sensory knowledge that expresses, as a sort of phenomenology, both our current abilities and our potential ones when it is time to highlight the mind's embodied influence on physical performance, well-being and social behaviour.
Children show this behavioural response as a characteristic of how they learn to understand their own decisions and actions through active perception and through mentalizing the actions of others. The process derives from how they learn to predict and anticipate the future actions, or factual outcomes, of their little friends. Not to mention the effect of their main reference figures when they first say hello, world!: their parents shape their daily routine with an extended repertoire of manual and cognitive activities, required to let them fit the environment and know its shapes and characteristics.
Well, here embodiment plays a fundamental role.
An immediate example is how my little nephews, or your children, behave. When they play with friends, they may prefer certain aspects of an activity because, through it, they can replicate, display and better understand abilities they have observed in others. Through similar mechanisms, they learn to respond to their parents' non-verbal feedback when it is time to bring order to the expected, and necessary, light chaos of a play session.
The point is that the larger our portfolio of experiences, of ourselves and of others, the higher the chance of clarifying what our Self is and what the Others' sphere of action and existence signifies (and implies). This is another way to explore and question ethics when we are required to combine these notions, reasoning about how Self-Other representations, empathy, mirror neurons and embodiment may shape how robotics designs intelligent architectures aimed at robots with ethical skills.
This premise on embodiment, and on the distinction between the Self and the Others, is a good starting point for underlining the difference between strong AI, which holds that a robot can achieve intelligent behaviour through its perceptual experience as a medium not yet fully under human control during tests, and weak AI, which treats robot intelligence as a defined, precise, controlled engineering craft. Strong AI resembles how our brain processes information: distributed, analogue and noise-robust; weak AI claims the opposite, resembling synchronous, digital and error-sensitive information processing.
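The contrast between noise-robust distributed processing and error-sensitive digital processing can be made concrete with a toy numerical sketch (the bit width, noise level and number of units here are arbitrary illustrative choices, not a model of either paradigm):

```python
import random

rng = random.Random(1)

# Digital, error-sensitive: flipping a single bit in an 8-bit code
# changes the stored value dramatically.
value = 200                        # binary 11001000
corrupted = value ^ (1 << 7)       # flip the most significant bit
digital_error = abs(corrupted - value)   # a one-bit fault costs 128

# Distributed, noise-robust: the same quantity stored redundantly as
# the mean of many noisy units; a fault of the same size in one unit
# barely moves the decoded value.
units = [200 + rng.gauss(0, 5) for _ in range(1000)]
units[0] += 128                    # hit one unit with the same-sized error
analog_error = abs(sum(units) / len(units) - 200)

print(digital_error, round(analog_error, 2))
```

The point of the sketch is only the order-of-magnitude difference: the digital encoding loses 128 units of value from one flipped bit, while the distributed encoding absorbs the same perturbation into a sub-unit decoding error.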
Robots able to rely on their sensorimotor algorithms and hardware have better chances of translating perceptual experiences and memories into clear language for whoever is listening, curious to know more, questioning their nature and origin.
These are forms of interaction that stay open-ended in the face of potentially novel situations: a kind of co-constructive process that changes itself over time, drawing on adaptive learning systems that learn to understand, anticipate and schematize others' intentions as information we can refer to at every moment. Knowledge and subjective conscience can address these ethical aspects, in the view of people sensitive to, and aware of, their own dynamic reflexivity.
For us, the exercise of thinking about how agents evolve in a dynamic environment, transcribing memories and actions into the language of sensorimotor circuits, helps us design systems able to deal with the "unexpected" through a feedback mechanism.
The latter makes a system adaptive, sensitive and robust to dynamic changes that require self-organization in the service of adaptivity and open-ended interaction: the mechanism that can translate information into knowledge. A co-constructive learning process of this kind appears capable of letting robots recognize, anticipate and logically understand others' intentions.
Alan Turing proposed that a machine should interact with humans, by trial and error and by imitation, to increase its experience and comprehension of the scene it models, just as a child does during development.
This embodies both our first challenge and our intimate fears: how noise and errors mediated by the environment can guide robots towards self-adaptation, as in reinforcement learning, leading to robust performance when it is time to cope with new and uncertain settings.
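A minimal sketch of this idea of adaptation through noisy feedback is the epsilon-greedy bandit, one of the simplest reinforcement-learning setups (the reward means, noise level and exploration rate below are illustrative assumptions, not parameters from any particular robot):

```python
import random

def run_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy agent learning the value of noisy actions online."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_means)   # current value estimate per action
    counts = [0] * len(true_means)        # how often each action was tried
    for _ in range(steps):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_means))
        else:
            arm = max(range(len(true_means)), key=lambda a: estimates[a])
        # The environment returns a noisy reward around the action's true mean:
        # the "noise and errors" that drive self-adaptation.
        reward = true_means[arm] + rng.gauss(0.0, 1.0)
        counts[arm] += 1
        # Incremental average: nudge the estimate toward the observed feedback.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

estimates, counts = run_bandit([0.2, 0.5, 0.9])
best = max(range(3), key=lambda a: estimates[a])
print(best)  # the agent converges on the highest-mean action
```

Despite rewards being corrupted by noise larger than the gap between actions, repeated trial and error lets the agent's estimates settle on the genuinely best choice, which is the sense in which environmental errors guide, rather than prevent, robust adaptation.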