We will need AI safety measures in place before the technological singularity consumes us, but this is in direct conflict with the profit motive of the wealthy. Knowing this, the wealthy will fight tooth and nail to maintain their profits while jeopardizing all of humanity, to the point of enslavement or even extinction. Maureen points to the deregulatory role of Republicans regarding Artificial Intelligence in pursuit of higher profits when she says,
One can just look at the recent growth of Republican deregulation in the areas of business, science, climate change, and the environment, just to name a few, to see the lack of oversight and control for the sake of making larger profits.
But this is neither the time nor the field to deregulate. To the contrary, it is regulatory oversight and policy control we so critically need now both nationally and internationally to guide the growth of the emerging technology.
The dangers of sailing into uncharted waters are real. And so is an artificial neuron that fires 200 million times faster than the human brain, as we read in the previous post. But this speed is just part of the equation. Once creativity, imagination, intuition, and emotional aspects such as anger, jealousy, and love are dissected by optogenetic techniques, a synaptome will chart all of these synaptic connections, and the organic wet human brain will unequivocally no longer be needed.
She also concludes by stating,
What could possibly go wrong? Technology has always come with guarantees only to routinely disappoint us. Remember the Titanic that could never sink? As we blindly venture forth into this new technological world we may potentially poison humanity's chances for survival only to discover there is no antidote.
Her fear is bolstered by this next article with an initial quote from a researcher in the field of AI,
Can we stop AI outsmarting humanity?
The spectre of superintelligent machines doing us harm is not just science fiction, technologists say, so how can we ensure AI remains "friendly" to its makers? By Mara Hvistendahl Mar 28th, 2019
https://www.theguardian.com/technology/ ... ingularity
It began three and a half billion years ago in a pool of muck, when a molecule made a copy of itself and so became the ultimate ancestor of all earthly life. It began four million years ago, when brain volumes began climbing rapidly in the hominid line.
Fifty thousand years ago with the rise of Homo sapiens sapiens.
Ten thousand years ago with the invention of civilization.
Five hundred years ago with the invention of the printing press.
Fifty years ago with the invention of the computer.
In less than thirty years, it will end.
The above quote by Yudkowsky caught the attention of Tallinn, and the article goes on to describe his reaction,
Reading Yudkowsky's article, Tallinn, who is a proponent of AI safety, became convinced that superintelligence could lead to an explosion or breakout of AI that threatens human existence, that ultrasmart AIs will take our place on the evolutionary ladder and dominate us the way we now dominate apes. Or, worse yet, exterminate us.
The encounter with Yudkowsky brought Tallinn purpose, sending him on a mission to save us from our own creations. He embarked on a life of travel, giving talks around the world on the threat posed by superintelligence. Mostly, though, he began funding research into methods that might give humanity a way out: so-called friendly AI.
"Friendly" means something much more fundamental: that the machines of tomorrow will not wipe us out in their quest to attain their goals.
Tallinn warns that any approach to AI safety will be hard to get right. If an AI is sufficiently smart, it might have a better understanding of the constraints than its creators do. Imagine, he said, waking up in a prison built by a bunch of blind five-year-olds. That is what it might be like for a super-intelligent AI that is confined by humans.
The remainder of the article presents more theoretical views as to how we can stop AI from annihilating humanity. The point is, no one knows for sure how to do this.
The strong of mind are welcome to review all the various containment options presented in the article and draw their own conclusions. As for me, I have reached the conclusion we need a new starting point and the article below best expresses my thoughts on the matter,
Ensure Artificial Intelligence Safety Before Worrying about the Singularity
Naveen Joshi 18/03/2019, https://www.bbntimes.com/en/technology/ ... ingularity
Mr. Joshi, like others before him, argues that AI can easily stray from the straight and narrow path of being tamed to our will, with a dystopian future descending upon us sooner than expected unless corrective actions are taken. This is standard fare, and the reader is welcome to read the original article as he presents his case. But where he excels, and where I resonate with his writing, is in the very title of his article,
Ensure Artificial Intelligence Safety Before Worrying about the Singularity
and then goes on to say,
With artificial intelligence (AI) research and development progressing at an unprecedented rate, artificial superintelligence seems closer than projected by most. It is more imperative to secure AI safety now as any later might be too late.
This to me seems like a reasoned and measured approach to the problem. Putting the cart before the horse, or as it is bluntly said, doing things a$$-backwards, is not a viable approach. And yet this is exactly the approach the conservatives are using as Doctor A states in his post #6,
A Trump presidency now tips the scale in the direction of a conservative ideological domination with the wealthy elites determined at all costs to control the technological Singularity. This is why your ideas and solutions are so important now, not later. In Singularity time, later is always too late. It is a now-or-never moment with the outcome of humanity hanging in the balance. Clearly, the FirstRateCrowd community business venture gives us a means to tip the scale back in our direction.
Trump and the Republican Congress will do anything to maintain their conservative agenda. They are determined at all costs to control this agenda, which in turn largely supports their control of the technological Singularity. As an example of this determination, President Trump fired the FBI director, James Comey, in a clear affront to the rule of law. This transgression was and continues to be supported by the Congressional Republicans. It is an unabashed attempt to derail the FBI's investigation into the role of the Russians in the 2016 Presidential election and the possible part that President Trump and his administration played in that effort. Such naked partisanship cannot hide their true intent; they will place party and conservative values over the security and good of the country at all costs. This determination and tenacity for their own agenda is expected to continue unabated until, if we are fortunate, a monumental change is made either legally or politically. But in the interim a great many individuals will suffer.
In summary, the future wealthy elite will use every means of control at their disposal and at any cost to maintain their predatory dominance over the poor. This includes the manipulation of future laws on their behalf with an obvious outcome being an increase of human rights abuse against the less fortunate.
By all means we must stop this folly. Cooler heads, that is to say progressive minds, will need to prevail or we are toast. Controlling the process of containing AI before it is too late should be mandatory law and if this cannot be done then proceeding down an unknown yet potentially toxic avenue should be excluded before we proceed.
Our approach must be proactive and not reactive. Once the genie is out of the bottle, the old method of crisis management will not work because time is not on our side. There is a relative disconnect between the time frame humans perceive and what is going on inside an algorithm. This is because human brains process information in a linear fashion, and we cannot truly grasp processes that unfold at exponential rates in any real sense.
For example, the 22-month-long Mueller report was just released, and no substantial laws have yet been created to deal with its findings or to mandate corrective actions, especially regarding the Russians' role in attacking our electoral system. Forget the 22 months. Instead, let's look at just 22 seconds in which a system expanding exponentially outside our control is on a path to do us harm. Suppose the system doubles its effective work with each iteration. After just 10 iterations, it has performed 22 × (2^10 − 1) = 22,506 seconds' worth of work, roughly 6.25 hours. But to us humans, this is perceived as just 10 linear periods of 22 seconds (10 × 22 = 220 seconds), or about 3.7 minutes. In fact, what we humans experience as 3.7 minutes is producing over six hours' worth of toxicity in real human terms.
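The gap between linear human perception and exponential machine work can be sketched in a few lines of code. The doubling-per-iteration growth rate here is an illustrative assumption, not a figure from any cited source; the point is only how quickly the two clocks diverge.

```python
# Perceived vs. actual time for an exponentially accelerating process.
# Assumption (illustrative only): the system doubles its effective work
# each iteration, starting from 22 seconds' worth of work.

BASE_SECONDS = 22   # human-perceived length of one iteration
ITERATIONS = 10

perceived = 0.0  # what a human observer experiences: 10 linear ticks
actual = 0.0     # cumulative work actually performed by the system

work = BASE_SECONDS
for _ in range(ITERATIONS):
    perceived += BASE_SECONDS  # humans see one more 22-second period
    actual += work             # the machine did `work` seconds of work
    work *= 2                  # exponential growth: double next round

print(f"perceived: {perceived:.0f} s ({perceived / 60:.1f} minutes)")
print(f"actual:    {actual:.0f} s ({actual / 3600:.2f} hours)")
```

After ten iterations the human observer has experienced 220 seconds, while the system has done 22,506 seconds' worth of work; a few more doublings and the gap becomes unbridgeable, which is the crisis-management problem described above.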
Ahead of us will be an ever increasing number of exponentially expanding algorithmic damage events that we will have to deal with in ever shortening time frames. Contending with this more rapid exponential reality as it barrels down the road towards us means we will not have the luxury of time to solve the problems at hand, let alone the ever increasing sets of new problems caused by not solving the last set of older problems. We will be overwhelmed and consumed by these problems unless we apply the brakes before the inevitable crash occurs. The implementation of rules and regulations to control Artificial Intelligence is needed now, not later. An unfettered, deregulatory, laissez-faire approach promoted by the wealthy's profit motive is not sufficient to protect us and guarantees a dystopian, apocalyptic future of our own clever making. Let us be smart, not clever, and guide the emerging field of Artificial Intelligence with safety measures first, before it devours us.
Without stringent regulations put on Artificial Intelligence, it is no longer a question of whether the technology produced from its development will eat our lunch; rather, it is a question of when it will eat us for lunch.