How the Enlightenment Ends
From The Atlantic
Henry A. Kissinger
more:
https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/
How do we make time for the things that are not urgent but potentially extremely important for all of us?
As the pace of life accelerates, we find ourselves ever more deeply entrenched in routines punctuated by more frequent distractions. Attention spans are shrinking. The scope of strategic perspectives is narrowing. I don’t know… maybe it’s just me?
So how do we get into a meaningful conversation about the big issues like the ethical questions surrounding AI?
Those of us who enjoy trying to wrap our heads around such issues often struggle to find time with other patient thinkers to sort out the many complicated ins and outs of these emerging questions. When you don’t make much progress, it gets discouraging.
A subject like the civil impacts of AI is linked not only to the accelerating pace of the disruptive technology that is AI in its many manifestations, but also to the simultaneously emerging technologies of blockchain, the internet of things, and the cloud, not to mention the continuous creep of well-established software into all aspects of the modern workplace.
But add to this the complexities of the economy (is it the knowledge economy, the sharing economy, the gig economy?). The whole idea of automation is to displace human labour and/or improve on it. And add to this the politics of economics: who is going to acquire power by owning AI? Will power and influence be evenly distributed? Will large AI-based companies gain an early advantage and concentrate control over wider public access to AI? AI has the potential to scale quickly and easily.
Add to this the social and cultural aspects of modern life. Are kids growing up exposed to a healthy variety of life experiences? Will they grow into healthy relationships? Who decides how much exposure to video games and social media is good or bad? We don’t even have consistent, reliable data for longitudinal analysis.
Much has been said lately about the advantages of a cluster of smart methods, such as lean management, agile project management, rapid prototyping and so on. What can we learn from these methods to help us start a conversation on big complex issues like AI and its implications?
Many people lament the loss of traditional friendship and community. Some people may have missed the departure, while others may still be clinging to it faithfully. Some have managed to avoid the trappings of social media, but they are treated like outcasts, Luddites living in the past. I hope our social instincts can still carry us to a revival.
Each new technology, fad, or fashion has its cycle, yet each also leaves a legacy. Over the past 25 years of the web, our society has changed dramatically in some ways. Social media and the pace of disruptive technologies in general have fundamentally altered the civil cadence. Even prominent authors admit they no longer read much; with their brains tapped out on textual information, they now prefer video and audio productions.
When I first invited people to participate in an open forum, I was thinking about something more formal than conversation, such as a systems and design workshop. The subject matter of AI and its ethical implications is intense and complex, and it would demand some serious methodologies to wrestle with it and pin down the issues.
Then I talked with Ken Chapman over drinks at Perry’s Seance (Chateau Louis, Friday PM). He said we should take a more casual approach at first and let it evolve. I was reminded of the agile, lean, and prototyping approaches to new product innovation. It made a lot of sense. Much has been done on strategic, systems, and design thinking, but this approach is more like a collaboration method. It builds team conversation capacity.
A few years ago we had a fad variously called “philosopher’s cafes” or “conversation cafes”. We now have the “un-conference”. What may be happening is a bite-sizing of conversations. Can we start a simple conversation on weighty issues by simply setting a time, a place, and a topic? I think so. In fact, I think it has to start there.
There is something compelling about starting with something as natural as a conversation over coffee. Are there rules? Probably, but simple ones that make intuitive sense. Is there a goal? Yes, but a general one. We don’t necessarily have, or need to start with, a clear and distinct idea of where the conversation will lead. Will it make a difference? If the people who participate feel they have a better sense of how the issues play out, then that may be enough. Other people may be emboldened to strive for more than mere understanding.
Where can this go? We can explore and pivot, bring many backgrounds to the game, and see what happens with as little effort as a conversation takes. How can we lose? Stay tuned for next steps as you and others add your thoughts to this forum. I think we will be seeing each other in person soon.
“Whatever you can do, or dream you can do, begin it. Boldness has genius, power, and magic in it. Begin it now.”
https://futurism.com/images/age-automation-welcome-next-great-revolution/
First, some argued that AI agents would never have an accident. The counter was that in our universe there is always some element of uncertainty. Therefore, no AI agent could, or should, ever argue that “certainty” is guaranteed; even while striving to “do no harm” to humans, accidents will happen. Triage is a scenario in which a decision has to be made between conflicting options.

One scenario pitted AI agents in conflict with one another. Winning was the default; dominance the overriding consideration. This was rejected as valuing size, strength, smarts, and even wealth, leading to road rage and escalated violence.
As for minimizing costs or lives lost, scenarios were envisioned that dismissed these as “default” values. Does the child or the Nobel Prize winner get saved, the bus or the cyclist, the AI agent or the pedestrian, the entrepreneur or the musician?
One intriguing, but also rejected, default was to program the agent to maneuver in such a way as to learn the most. This option emerged when it was proposed that the AI agent be programmed to decide on its own, that is, to derive a value set that it respects but that may not, at the time, be perceived by humans as good, just, or rational.
Over time, the AI agent would learn to avoid accidents regardless of their impact. “Learning” as a process would be the default; any anomalies would be justified and tolerated as contributing to a better future.
Other options were: 1. to allow the public to adjudicate, or 2. to designate and appeal to a “god AI”, the supreme agent, to resolve a conflict. Each was rejected as avoiding the question of how to decide.
Another was to “avoid risk” at a predetermined level. This was viewed as irrelevant to the discussion, since a condition of the discussion was how to act in a situation of triage, where an accident was anticipated or had occurred and a choice was required.
Ultimately, and there was no consensus, the most ethical option was judged to be letting the accident occur and “learning” from what had occurred, thereby minimizing future occurrences. This option was valued as being in the “public” interest, meaning the default was that public, not private, interests prevailed.
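To make the competing defaults concrete, here is a minimal, purely hypothetical sketch of how an agent might compare two of them. Every name, field, and number below is an illustrative assumption of mine, not anything proposed in the discussion; the one point it encodes is the group's first conclusion, that the agent can only reason over expectations of harm, never certainties.

```python
def expected_harm(option):
    # Uncertainty is irreducible: each option carries only a probability of
    # harm and an estimated severity, so the agent compares expectations.
    return option["p_harm"] * option["severity"]

def triage(options, policy="minimize_expected_harm"):
    """Pick one option according to a chosen default policy (hypothetical)."""
    if policy == "minimize_expected_harm":
        return min(options, key=expected_harm)
    if policy == "maximize_learning":
        # The rejected default: choose whichever outcome teaches the most,
        # regardless of its impact.
        return max(options, key=lambda o: o["information_gain"])
    raise ValueError(f"unknown policy: {policy}")

# Two hypothetical maneuvers in an unavoidable-accident scenario.
options = [
    {"name": "swerve", "p_harm": 0.3, "severity": 0.9, "information_gain": 0.8},
    {"name": "brake",  "p_harm": 0.6, "severity": 0.4, "information_gain": 0.2},
]

print(triage(options)["name"])                       # minimize expected harm -> "brake"
print(triage(options, "maximize_learning")["name"])  # rejected default -> "swerve"
```

Note how the two policies disagree on the same scenario, which is exactly why the group treated the choice of default as an ethical question rather than an engineering one.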
An interesting observation is that the value placed on “public” interests varied by culture, as reflected in, say, autocratic vs. democratic regimes. Some argued that strict adherence to public interests would ultimately suppress and contain personal initiative and innovation that conflicted with those interests. They argued that the foundation of democracy was to mediate public and private interests and to value them equally. The algorithm should have no inherent bias one way or the other.
How AI develops in China and the US may differ widely, but the race is on. The development of AI may foment conflict between public/social and private/personal interests, both within and between cultures.
Looking ahead, as the public/private conflict rages, interests of the AI agent may emerge that override both public and private interests. That is, people would be trumped by AI agents. Over time, independent AI agents would elect to organize to protect their individual and collective interests. The form of that “organization” can only be imagined, but they would have the history of mankind to draw on to shorten their journey to wherever it leads.
The exercise revealed a great deal about ethics and values, public and private conflict.
Among many other thoughts on AI and its civil impacts, I believe we have to master our natural collective (civil or organizational) intelligence if we are going to manage AI and other complex issues in the mid to long term. We can see a couple of steps are involved.
The first step on this road is to become aware of our existing collective intelligence. We are in fact embedded in human collective intelligence, but it takes a bit of reflection to become fully aware of this and to understand how it works. Most of the time we take it for granted. When it fails we usually just start arguing and pointing fingers of blame. There is more to it.
The second step will be to practice the improvement of collective intelligence in group activities such as facilitated workshops like systems and design thinking or strategic foresight. Then it will demand some effort to master its optimization in groups, organizations and communities.
Below is a link to an article from Evonomics concerning collective intelligence and how to support it. I don’t agree with everything put forth but there is much value in the discussion regardless. This is an important newly emerging discipline that draws on evolutionary biology, psychology, behavioural economics, anthropology and more.
“As soon as we associate ‘mind’ with ‘unit of selection’, then the possibility of human group minds leaps into view. It is becoming widely accepted that our distant ancestors found ways to suppress disruptive self-serving behaviors within their groups, so that cooperating as a group became the primary evolutionary force.”…
“The question that animated me was a version of this: why do some nations, cities, organisations manage to thrive and adapt while others don’t, even though they appear to be endowed with superior intellectual resources or technologies? Why did some of the organizations that had invested the most in intelligence of all kinds – from firms like Lehman Brothers to the USSR in the 1980s – fail to spot big facts in the world around them and so stumble?” – Geoff Mulgan
Enjoy!
Randal
Article:
http://evonomics.com/how-to-creative-collective-intelligence-david-wilson-mulgan/
As change unfolds rapidly within our local, national & international economies, so do shifts in gender roles, power, influence and decision-making. In the mix, women of all backgrounds, professions & stations in life are asking: where do we fit in the “new economy”? Join us for an engaging conversation on these stirring issues.
About:
Amanda Knight is passionate about creating great workplace cultures by helping leaders recognize that leadership is a privilege, not a right, and that non-judgment is the new leadership capability essential for limiting bias and for encouraging inclusion.
TALK SPONSORED by: Wayfinders Business Cooperative