Social robots' behavior: attitudes and action possibilities.

Robots are part of our lives; we find them in everyday environments because they are smart, functional, and able to make autonomous decisions. This rapid progress of intelligent solutions is driven by teams of engineers, system designers, and robotics experts who shape, with qualitative and quantitative expertise, the agenda of robotics development and interaction with people.

In one specific field, elder care, these systems should be responsibly designed (a litmus test for the nature of their responses) to earn stakeholders' trust in their behavior, clarifying the key processes behind their decisions and the data they select.

Such a robot should also be able to reason about the priority of the values of its users and other stakeholders, and to log the reasons for a particular choice made during its relationships with humans.

A scene from the American movie Robot & Frank, a science-fiction comedy-drama directed by Jake Schreier and written by Christopher Ford (2012).

In this post, I'll try to follow the stream of ethical decision making as the ensemble of conditions that give principles and social norms the power to determine which attitudes and responses are acceptable, linking them to the options a robot has (according to the programmer's free will and expertise in robot control) to choose between different possible courses of action.

Today, robots should adhere to the principles of accountability, responsibility, and transparency as standard characteristics of social robots, since these aspects allow a robot to report on the reasons behind its choices and the options it took: a sort of ethical decision making.

This need reveals our desire to understand why and how a robot is somehow aware of the principles, values, and norms that determine which behavioral attitudes and responses are acceptable and must be followed.

Higher autonomy is generally linked with increased responsibility, even if these two notions take on a slightly different shade when applied to intelligent machines instead of humans. When a system is designed to enable a trustworthy understanding of its decisions and the data it uses, human values, priorities, and choices have a good chance of being transparently included in the robot's decision-design process.

Let's start with accountability: trusted interaction and the ability of the system to justify its decisions and actions to its direct users and to other humans who may share the same working area. To ensure a plausible match with the user's moral values, carefully selected models and algorithms for machine decision making enable robots to represent the reasoning guiding their actions (forming beliefs, making decisions) and to explain it, handling decisions and placing them in a broader context.
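To make that less abstract, here is a minimal sketch of what such a justification log could look like. Everything in it – the names `AccountabilityLog`, `record`, `justify`, and the elder-care example itself – is invented for illustration, not taken from any real robot stack:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Decision:
    """One logged decision: what was chosen and why."""
    action: str
    beliefs: List[str]   # beliefs held at decision time
    rule: str            # the norm that licensed the action

@dataclass
class AccountabilityLog:
    """Append-only record the robot can replay to justify itself."""
    entries: List[Decision] = field(default_factory=list)

    def record(self, action: str, beliefs: List[str], rule: str) -> None:
        self.entries.append(Decision(action, beliefs, rule))

    def justify(self, action: str) -> List[str]:
        """Human-readable reasons for every time `action` was taken."""
        return [
            f"Chose '{e.action}' because {' and '.join(e.beliefs)} (rule: {e.rule})"
            for e in self.entries if e.action == action
        ]

# Hypothetical example: an elder-care robot justifying a medication reminder.
log = AccountabilityLog()
log.record("remind_medication",
           beliefs=["it is 09:00", "the morning pills are untouched"],
           rule="prioritize the user's health")
print("\n".join(log.justify("remind_medication")))
```

The point is not the data structure itself but the discipline: every action carries the beliefs and the norm that produced it, so the robot can answer "why did you do that?" after the fact.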

Regarding explanation, a possible approach is to apply evolutionary ethics and structured argumentation models: these tools enable the creation of a so-called modular explanation tree, where each node can explain other nodes located at a lower level. In every layer, each node encapsulates a particular reasoning module that is treated as a black box, given the lack of transparency of AI systems. This approach even works for stochastic, logic-based, or data-driven models that rely on social heuristics rather than on the moral rules mentioned above.
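Here is a toy version of such a modular explanation tree, assuming each node simply stores the conclusion of its (black-box) reasoning module plus the child nodes it explains; the module names are my own invention:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExplanationNode:
    """A node in a modular explanation tree.

    Each node wraps one reasoning module (treated as a black box)
    and explains the nodes directly below it."""
    module: str      # name of the black-box reasoning module
    conclusion: str  # what this module concluded
    children: List["ExplanationNode"] = field(default_factory=list)

    def explain(self, depth: int = 0) -> str:
        """Walk the tree top-down, one indented line per layer."""
        line = "  " * depth + f"{self.conclusion} (module: {self.module})"
        return "\n".join([line] + [c.explain(depth + 1) for c in self.children])

# A toy tree: the top node explains the lower-level nodes layer by layer.
root = ExplanationNode(
    "value_reasoner", "Reminded Frank to take his medication",
    children=[
        ExplanationNode("scheduler", "Medication is due at 09:00"),
        ExplanationNode("vision", "Today's pillbox compartment is still full"),
    ],
)
print(root.explain())
```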

You need to imagine that these forms of autonomy vary on a scale, from zero to full capacity, and include several complex levels of plan, goal-direction, and motive autonomy aligned with responsibility; a key factor if a human actor is in charge of evaluating the robot's decisions, identifying errors and unexpected results.
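As a sketch of how such a scale could be wired to human oversight – the thresholds and labels below are purely invented assumptions, not any standard – consider:

```python
def oversight_required(autonomy: float) -> str:
    """Map a scalar autonomy level (0.0 = none, 1.0 = full capacity)
    to the kind of human oversight the decision should receive.

    The thresholds here are invented for illustration only."""
    if not 0.0 <= autonomy <= 1.0:
        raise ValueError("autonomy must lie in [0, 1]")
    if autonomy < 0.3:
        return "human executes: the robot only suggests"
    if autonomy < 0.7:
        return "human approves: the robot proposes, the person confirms"
    return "human audits: the robot acts, the person reviews the log for errors"

for level in (0.1, 0.5, 0.9):
    print(level, "->", oversight_required(level))
```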

Moral responsibility is often associated with the capacity for moral deliberation, and the robot should be able to behave in relation to these values. First of all, we need to set priorities between values, teaching our intelligent machine to deal with moral dilemmas (real scenarios where every possible action violates one or more values), considering the ethical theories that play a constant role when it is time to make a decision. Yes, the perverse situation of being caught between utilitarianism, doing the best for most of our reference group, and Kantian deontological categorical imperatives. In the latter case, the morality of an action is based on whether the action is right or wrong in itself – under definite rules – rather than on the consequences it might produce.
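A toy contrast between the two theories, with welfare numbers and rules that I made up just to make the tension concrete:

```python
from typing import Dict

# Toy world: each action maps to the welfare it produces per stakeholder,
# plus the rules it would break. All numbers and rules are invented.
ACTIONS: Dict[str, Dict] = {
    "tell_white_lie":  {"welfare": {"Frank": 2, "family": 1},
                        "violates": ["do not lie"]},
    "tell_hard_truth": {"welfare": {"Frank": -1, "family": 1},
                        "violates": []},
}

def utilitarian_choice(actions: Dict[str, Dict]) -> str:
    """Pick the action with the greatest total welfare, ignoring the rules."""
    return max(actions, key=lambda a: sum(actions[a]["welfare"].values()))

def deontological_choice(actions: Dict[str, Dict]) -> str:
    """Pick an action that violates no rule; refuse if none exists."""
    permitted = [a for a in actions if not actions[a]["violates"]]
    if not permitted:
        raise RuntimeError("genuine dilemma: every action violates a rule")
    return permitted[0]

print("utilitarian:", utilitarian_choice(ACTIONS))      # tell_white_lie
print("deontological:", deontological_choice(ACTIONS))  # tell_hard_truth
```

The same situation, two defensible answers: that is exactly the gap a designer has to close when setting priorities between values.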

I have in mind many human actors who face scenarios of this kind – developers, manufacturers, policymakers, and even us, the users – all of whom play a role in this liability process, posing new challenges and questions for our society.

The chain of responsibility will continue to grow and become more complex.

For example, we can reflect on who is truly responsible if an automated car hits a wall without any warning. Is it the engineer who built the sensors and actuators? The software developer who gave the car the ability to autonomously decide the path to follow? The politician who, finding this technology so brilliant, allowed its application without any form of restriction? The owner of the car, who decided to personalize – in such a way – its decision-making system?

Personally, I think that despite the complex deliberation algorithms at our disposal, used to make a machine intelligent enough to perform actions with a moral impact, well, these machines are – and should be seen as – our artifacts.

Therefore, it is not their responsibility, legal or moral, if something goes wrong.

Social robots learn through algorithms created by people, and this is connected with various kinds of shortcut heuristics used to form judgements and to make decisions oriented toward avoiding mistakes.

Sometimes these heuristics are efficient and make the process straightforward and smooth; sometimes they rely too much on habit, leading to an erroneous step in the organization of an argument or, even more simply, to a basic misconception of reality.

Yes, I know. It is natural to expect intelligent robots to be able to make decisions and act autonomously. In many cases, these results can be achieved in collaboration with the user, even letting the environment guide appropriate actions. Two possibilities then play a role in this effort to complement the robot's response. The first remains human control, which, at various levels, guarantees a shared awareness of the situation so that the person can intervene in the process and make a decision. The second is environmental regulation, which ensures that the robot never gets into a moral-dilemma scenario: such a deviation simply cannot happen, because specific constraints make a robot's moral decision completely unnecessary. Just think of the benefits of applying this solution in manufacturing environments.
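A compact sketch of how these two safeguards could be combined – the action names, the `allowed` list, and the fallback behavior are all assumptions made for illustration:

```python
from typing import List, Optional

def choose_action(candidates: List[str],
                  allowed: List[str],
                  human_override: Optional[str] = None) -> str:
    """Two complementary safeguards, sketched together.

    Human control: if the person intervenes, their choice wins.
    Environmental regulation: `allowed` lists only actions the
    constrained environment makes safe, so no dilemma can arise."""
    if human_override is not None:
        return human_override           # the person stays in the loop
    safe = [a for a in candidates if a in allowed]
    if not safe:
        return "stop_and_ask_human"     # never improvise
    return safe[0]

# A manufacturing cell: the fenced-off walkway means only 'move_arm' is safe.
print(choose_action(["move_arm", "enter_walkway"], allowed=["move_arm"]))
```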

Responsibility belongs only to people: the thinkers, developers, and sellers of hardware and software solutions.

Every robot's perspective is shaped by human intervention and by our mediated ability to detect errors or unexpected results. Because this chain of stakeholders is growing fast, users and bystanders need to be able to understand how robots' decisions align with fair use of data and with the add-on features that make them smart participants in a social context.

Here, the role of education is fundamental. It ensures that robots become part of societal awareness, as people learn how robots identify and deal with moral societal values and norms.

A scene from the American movie Robot & Frank (2012).

It is at this point that transparency comes on the scene, playing the role of the diva in this effort to explain, describe, inspect, and reproduce the mechanisms through which AI systems enable the robot to make decisions and learn to adapt to its environment.

Yes, autonomous decisions that can be fully carried out while remaining aligned with a responsible respect for societal values require in-depth knowledge of multicultural contexts, placing human needs at the core of robots' calls.

In the near future, social robots will be an expected part of our lives.

The chance to let them make decisions that affect our daily routines – to different degrees and in different application scenarios – is already here, and it is our responsibility to educate ourselves, and maybe even our peers, to give these questions the importance they deserve.

LH
