The race is on. “The future of work” is but one dimension…

Reposted from Perry Kinkaide

The race is on. “The future of work” is but one dimension of any discussion of the impact of AI as a positive and/or negative force on everything, everywhere, now and into the future. Parallels in history are the advent of anything that advanced knowledge and its spread: from the alphabet to the printing press, from political to educational institutions, from the wheel to the computer, from the advent of money to the creation of the internet.
At a recent session (#1-2018), we discussed “triage”: the design of an algorithm for deciding who wins and who loses in an accident. Various scenarios were considered, and each surfaced a different set of ethics.
An overriding assumption was that the driver, an AI agent, had access to all knowledge and the entire environment, including the history of any other agents or people involved. It would, for example, be able to triage what or whom to save and might therefore be able to mitigate the loss of any preprogrammed value(s).

First, some argued that AI agents would never have an accident. The counter was that in our universe there is always some element of uncertainty. Therefore, no AI agent could or should ever argue that “certainty” is guaranteed; while striving to “do no harm” to humans, accidents will happen. Triage is a scenario where a decision has to be made between conflicting options.

One scenario pitted AI agents in conflict with one another. Winning was the default; dominance the overriding consideration. This was rejected as valuing size, strength, smarts, and even wealth, leading to road rage and escalated violence.

As for minimizing costs or lives lost, scenarios were envisioned that dismissed these too as “default” values. Does the child or the Nobel prize winner get saved, the bus or the cyclist, the AI agent or the pedestrian, the entrepreneur or the musician?
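The objection to any single hard-coded default can be made concrete. In the sketch below (all field names, options, and figures are hypothetical, not from the discussion), the triage step is just a minimization over candidate maneuvers; whichever value function is plugged in silently encodes one ethic, and different defaults pick opposite maneuvers from the same facts.

```python
from typing import Callable

# A candidate maneuver and its predicted consequences (hypothetical fields).
Outcome = dict  # e.g. {"lives_lost": 1, "cost": 50_000}

def triage(options: list[Outcome], value: Callable[[Outcome], float]) -> Outcome:
    """Pick the maneuver with the lowest loss under a chosen value function.

    The choice of `value` *is* the ethics: minimizing lives, cost, or any
    other metric bakes one default into the agent.
    """
    return min(options, key=value)

options = [
    {"lives_lost": 0, "cost": 200_000},  # swerve: property damage only
    {"lives_lost": 1, "cost": 0},        # brake: one casualty
]

# Two different "defaults" choose differently from the same scenario.
by_lives = triage(options, value=lambda o: o["lives_lost"])
by_cost = triage(options, value=lambda o: o["cost"])
```

Here `by_lives` accepts the expensive swerve while `by_cost` accepts the casualty, which is exactly the dilemma the group refused to resolve with any single default.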

One intriguing, but also rejected, default was to program the agent to maneuver in such a way as to learn the most. This option emerged when it was proposed that the AI agent be programmed to decide on its own, that is, to derive a value set that it respects but that may or may not at the time be perceived by humans as good, just, or rational.

The AI agent would, over time, learn to avoid an accident regardless of its impact. “Learning” as a process would be the default; any anomalies would be justified and tolerated as contributing to a better future.

Other options were: 1. to allow the public to adjudicate, or 2. to designate and appeal to a “god AI”, the supreme agent, to resolve a conflict. Each was rejected as avoiding the question of how to decide.

Another was to “avoid risk” at a predetermined level. This was viewed as irrelevant to the discussion, since a condition of the discussion was how to act in a situation of triage, where an accident was anticipated or had occurred and a choice was required.
Ultimately, and there was no consensus, the most ethical option was to let the accident occur and “learn” from what had occurred, thereby minimizing a future occurrence. This option was valued as being in the “public” interest, meaning the default was that public, not private, interests prevailed.

An interesting observation is that the value of “public” interests varied by culture, as reflected in, say, autocratic vs. democratic regimes. Some argued that strict adherence to public interests would ultimately suppress and contain personal initiative and innovation in conflict with public interests. They argued that the foundation of democracy was to mediate between, and value equally, public and private interests. The algorithm should have no inherent bias one way or the other.
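The “no inherent bias” position can be expressed as an equal weighting of the two interest scores. A minimal sketch, assuming hypothetical scoring inputs (the discussion did not specify how such scores would be computed):

```python
def combined_score(public_score: float, private_score: float,
                   w_public: float = 0.5, w_private: float = 0.5) -> float:
    """Blend public and private interest scores.

    Equal default weights encode the 'no inherent bias' position:
    neither interest dominates the other. An autocratic tuning would
    raise w_public; a purely individualist one would raise w_private.
    (All names and weights here are illustrative assumptions.)
    """
    assert abs(w_public + w_private - 1.0) < 1e-9  # weights must sum to 1
    return w_public * public_score + w_private * private_score
```

With equal weights, a scenario scoring 1.0 for public interest and 0.0 for private interest is valued exactly the same as its mirror image, which is the symmetry the democratic argument demands.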

How AI develops in China and the US may differ widely, but the race is on. The development of AI may foment conflict between public/social and private/personal interests within and between cultures.

Looking ahead, as the public/private conflict rages, interests of the AI agent may emerge that override both public and private interests. That is, people would be trumped by AI agent(s). Over time, independent AI agents would elect to organize to protect their individual and collective interests. The form of that “organization” can only be imagined, but they would have the history of mankind to draw on to shorten their journey to wherever it leads.

The exercise revealed a great deal about ethics and values, public and private conflict.