Pitchforks And Torches Will No Longer Be Able To Stop The 1%

The technological singularity is a point in the near future when technological growth becomes uncontrollable and irreversible, resulting in unfathomable changes to human civilization. It will advance at inconceivable rates and will overwhelm our human capacity to understand it. The technological singularity and economic inequality are two powerful streams that will merge, massively feeding on each other to create the ultimate disaster. We will either live in a twisted dystopian reality, become extinct as a species, or both. We must stop this now.

Posted on: » Mon Nov 18, 2019 6:45 pm #41

Jessica
Posts: 64
Joined: Fri Apr 14, 2017 3:03 pm
REFERENCING: Sterling Volunteer, Post #40, Posted Oct 10, 2019
Even without quantum computers, new non-neuromorphic computer chips may soon be operating at phenomenal exascale speeds. This means they could in short order operate at speeds reaching nearly one thousand times faster than the computers we have today.
...

Re: Pitchforks And Torches Will No Longer Be Able To Stop The 1%

Post by Jessica » Mon Nov 18, 2019 6:45 pm

FIGHT INCOME AND ECONOMIC INEQUALITY: Artificial Intelligence (AI) is advancing exceedingly fast, and now we have machines creating even faster machines. Some still think the technological singularity is a mere fantasy, a science fiction writer's mind gone off the rails. Make of this what you will, but I for one think this kind of rational deniability about the future is sheer folly, void of any substantive creativity.

SingularityHub
AI Uses Titan Supercomputer to Create Deep Neural Nets in Less Than a Day
By Peter Rejcek - Jan 03, 2018 https://singularityhub.com/2018/01/03/a ... han-a-day/
The prospect of artificially intelligent machines creating other artificially intelligent machines took a big step forward in 2017. However, we’re far from the runaway technological singularity futurists are predicting by mid-century or earlier, let alone murderous cyborgs or AI avatar assassins.

The first big boost this year came from Google. The tech giant announced it was developing automated machine learning (AutoML), writing algorithms that can do some of the heavy lifting by identifying the right neural networks for a specific job. Now researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, have developed an AI system that can generate neural networks as good if not better than any developed by a human in less than a day.

It can take months for the brainiest, best-paid data scientists to develop deep learning software, which sends data through a complex web of mathematical algorithms. The system is modeled after the human brain and known as an artificial neural network. Even Google’s AutoML took weeks to design a superior image recognition system, one of the more standard operations for AI systems today.
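To make the idea a little more concrete (this is only my own toy sketch, not Google's AutoML or the ORNL team's actual method), here is roughly what "software designing neural networks" looks like at its very simplest: a random search over candidate architectures that keeps whichever candidate scores best. The search space, the scoring function, and the trial budget below are all made up for illustration.

import random

# Made-up search space: how many layers, how wide, which activation.
SEARCH_SPACE = {
    "num_layers": [1, 2, 3, 4],
    "units": [16, 32, 64, 128],
    "activation": ["relu", "tanh"],
}

def sample_architecture(rng):
    """Randomly sample one candidate network description."""
    depth = rng.choice(SEARCH_SPACE["num_layers"])
    return {
        "layers": [rng.choice(SEARCH_SPACE["units"]) for _ in range(depth)],
        "activation": rng.choice(SEARCH_SPACE["activation"]),
    }

def evaluate(arch):
    """Stand-in for 'train this network and report validation accuracy'.
    A real system (AutoML, or ORNL's evolutionary approach) would spend
    hours of compute here; this placeholder just rewards mid-sized nets
    so the loop has something to optimize."""
    size = sum(arch["layers"])
    return 1.0 - abs(size - 200) / 200.0

def random_search(budget=50, seed=0):
    """Try 'budget' random candidates and keep the best scorer."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(budget):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = random_search()
    print("best candidate:", arch, "score:", round(score, 3))

The real systems replace both the random sampling (with reinforcement learning or evolutionary algorithms) and the placeholder scorer (with actual training runs on hardware like Titan), which is exactly why a supercomputer is needed to do it in under a day.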
It seems like just yesterday that computers creating more advanced computers, which would in turn create even faster and more advanced ones, was only a concept. Now it is a reality. And yet we still have individuals who cannot see the future forest before the virtual trees, as is the case with the skeptical author, Mr. Jason Rhode, in the following article.

Salon
The right-wing politics of the "Singularity"
By Jason Rhode, May 26, 2018 https://www.salon.com/2018/05/26/the-ri ... ngularity/
The problem of proof

I am embarrassed to make such a minor, pedantic point in discussing a question as big as the Singularity, but—well, it's just that, quite simply, there is no evidence to support it.

None. It is speculation informed by fiction. The Singularity is fantasy covered by a patina of rational thought, but. It. Is. Not. Rational. PowerPoint-informed imagination is not sufficient cause for physical existence. Belief in approaching AI, or even mildly human-like AI, requires considerable delusion. Our spreadsheets and databases possess no mindfulness. And the Singularity has all the problems of AI, coupled with the manic imagination of people who spend too much time looking at graphs. Technology does not create itself any more than boy bands do.

The argument behind Singularity amounts to bare extrapolation, and that's it. Look at how fast computers are getting! Wow, what if we extend the line out to eternity? Singularity thinking is like a man who finishes watching the Star Wars trilogy in 1998, right before the Phantom Menace comes out. Then he goes home and plots out the sequels. Allowing for advances in moviemaking, his chart tells him that this next movie will be the greatest event in cinematic history.
I wish to remind Mr. Rhode of the power of fiction and fantasy. Rational thought has its place in the world, but so does creativity. His problem is a lack of imagination. The following article illustrates my point.

SingularityHub
Why Companies and Armies Are Hiring Science Fiction Writers
By Marc Prosser - Aug 06, 2019 https://singularityhub.com/2019/08/06/w ... n-writers/
Are you a science fiction writer? Do you have command of the French language? If you can answer yes to both questions, a new job opportunity may be just the thing for you.

The recently formed French Defense Innovation Agency (DIA) is looking to assemble a ‘red team’ of science fiction writers and futurists. The BBC reports that the team will use “[…]role play and other techniques to imagine how terrorist organizations or foreign states could use advanced technology.”

Their job will be to identify possible future disruptions that the military itself might not have considered.

Initially, the idea of an army turning to sci-fi writers to help predict future threats may sound like, well, something out of science fiction. However, it is an approach that has been deployed by military institutions, including NATO and reportedly the US Army, and multinational companies.

Part of the reason is science fiction’s track record of becoming science fact.
1984: Are We There Yet?

In the wake of the 2016 US election, George Orwell’s 1984 made a somewhat surprising return to bestseller lists. At the time, much was made of similarities between utterances coming from members of Trump’s White House and the government of 1984’s Oceania, none more so than Kellyanne Conway’s ‘alternative facts’ and ‘blackwhite’—both of which are clearly ‘duckspeak.’

Perhaps more sinister are similarities between 1984’s depictions of mass surveillance and the Chinese Government’s social credit system, or big tech companies’ data collection strategies.

But not all science fiction predictions-come-true are as dark as Orwell’s. AI and the idea of virtual assistants both featured prominently in 2001: A Space Odyssey. Arthur C. Clarke, who collaborated with Stanley Kubrick on the movie, wrote extensively of driverless cars long before Waymo was but a twinkle in Google’s eye. Drones featured in the Back to the Future films from the 80s (walking dogs, no less) and Ray Bradbury wrote about in-ear headphones around 1950. The list(s) go on.

Not all science fiction comes true, of course. Personally, I’m still waiting on Marty McFly’s hoverboard and the whale bus that people around the turn of the 20th century thought was just around the corner.
Indeed, in my mind's flight of fantasy I can imagine the creators of the Titan supercomputer having thought about turning their new technology on Titan itself to produce an even more powerful and intelligent version of it, perhaps creating an even faster, newer, and more creative Titan with updated neural networks.

So you see, humanity does not progress by rational thought alone. But I still take heed of what Sterling said in the previous post and think about it often:
In any event, speeds of one thousand or two hundred million times faster than today's computers all seem like a freight train barreling towards us. The computer revolution is coming fast but will anyone be able to control the collisions should the trains go off the tracks?
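For scale, and this is just my own arithmetic, not Sterling's: "exascale" means on the order of 10^18 operations per second, while the petascale machines that have dominated the last decade run at roughly 10^15, so

10^{18}\ \text{ops/s} \;/\; 10^{15}\ \text{ops/s} \;=\; 10^{3},

about a thousandfold. Measured against the handful of top 2019 machines, which already reach a couple hundred petaflops, the jump is smaller, but the freight-train image still holds.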
Times Referenced: 0

Posted on: » Thu Nov 21, 2019 9:46 pm #42

Sterling Volunteer
Global Moderator
Posts: 60
Joined: Tue Jan 10, 2017 6:15 pm
REFERENCING: Doctor A, Post #35, Posted Sep 15, 2019
The preceding post #29 makes clear we will need new rules and regulations to deal with the threat of Artificial Intelligence (AI). Yet we do not have a government in place capable of carrying out these directives in an effective manner. Gov...

Re: Pitchforks And Torches Will No Longer Be Able To Stop The 1%

Post by Sterling Volunteer » Thu Nov 21, 2019 9:46 pm

Artificial intelligence will need new rules and regulations to protect society. Both good and bad events can come about as AI further integrates itself into our society at breakneck speeds. Regardless of the good or bad and moral or immoral situations manufactured by AI, the pros and cons of AI indicate a risk-to-benefit ratio for its usage. Everything has trade-offs, and these need to be managed. But I fear many of the negative aspects of AI will no longer be able to be controlled by pitchforks and torches. The truly dark aspects of this new field may never be controlled at all. Nonetheless, all of these situations will require new regulations and even new legal concepts to protect society. A brief summary of both future positive and negative concerns is presented in the following lecture.

The Exponential Guide to Artificial Intelligence
“AI is here today; it’s not just the future of technology. It’s embedded in the fabric of your everyday life.” —Neil Jacobstein, Singularity University Chair, AI & Robotics
https://su.org/resources/exponential-gu ... elligence/
What is clear: AI-powered products and services have made it into nearly every aspect of our personal and professional lives in just a few years. And as AI solutions continue to emerge and converge, that pace of change will only continue to accelerate. It’s easy to find scenarios of a utopian future of abundance where machines do all the hard work—as well as grim scenarios where unemployment soars as traditional workers are replaced by increasingly capable machines.
There is a popular argument that tools like AI essentially are neutral, and can be used for good or evil, depending on the user’s intentions. While AI is unique in that we’re building it to be capable of developing its own learning and “intentions,” it’s realistic to expect that for the foreseeable future, AI will be shaped by the direction of its human creators.

We can say with certainty that AI is such a profound tool that its impact marks a true global paradigm shift, similar to the revolutions brought about by the development of agriculture, writing, and manufacturing.

While the future changes that AI will bring are almost impossible to imagine, we have identified three key benefits and three key risks worth keeping in mind:

Risks of AI

Drastic changes to our lives
AI created with bad intention
AI created with good intention goes bad

Benefits of AI

Increased efficiency
Solving problems for humanity
Liberate humans to do what they do best

What Are the Benefits of AI, in Greater Detail?

In an ideal world, AI represents a win-win scenario by providing strengths that humans don’t possess. Advanced pattern recognition, computing speed, and nonstop productivity courtesy of AI allow humans to increase efficiency and offload mundane tasks—and potentially solve problems that have evaded human insight for thousands of years. Let’s look at some benefits of AI in more detail.

AI offers increased efficiency

We are human, and so we make mistakes and get tired. We can only perform competent work for a limited time before fatigue takes over and our focus and accuracy deteriorate. We require time to unplug, unwind, and sleep.

AIs have no biological body, side-gig, or family to pull their attention away from work. And while humans struggle to keep focus after a while, AIs stay as accurate whether they work one hour or 1,000 hours. While they work, these AIs can also be accurately recording data that will, in turn, provide more fuel for their own learning and pattern recognition.

For this reason, AI is transforming every industry. The amount of time and energy companies have to invest in repetitive manual work will diminish exponentially, freeing up time and money, which in turn allows for more research and more breakthroughs for each industry.

AI is solving problems for humanity

As AIs gain greater capabilities and are deployed in different capacities, we can expect many of the problems that have plagued government, schools, and corporations to be solved. AIs will also be able to help improve our justice system, healthcare, social issues, economy, governance, and other aspects of our society.

These critical systems are rife with challenges, bottlenecks, and outright failures. In each realm, human bureaucracy and unpredictability seem to slow down and sometimes even break the system. When AIs gain traction in these important domains, we can expect much more rational, fair, and thorough examinations of data, and improved policy decisions should soon follow.

AI is liberating humans to do what they do best

As AIs become more mainstream and take over mundane and menial tasks, humans will be freed up to do what they do best—to think critically and creatively and to imagine new possibilities. It’s likely this critical thought and creativity will be augmented and improved by AI tools. In the future, more emphasis will be placed on co-working situations in which tasks are divided between humans and AIs, according to their abilities and strengths.

Perhaps the most important task humans will focus on is creating meaningful relationships and connections. As AIs manage more and more technical tasks, we may see a higher value placed on uniquely human traits like kindness, compassion, empathy, and understanding.

What Are the Risks of AI, in Greater Detail?

Will AI change our current way of life? Absolutely. Do we know exactly how? Absolutely not.

AI already is affecting nearly every aspect of our personal and professional lives. Every human institution—businesses, governments, academia, and non-profits—is already experiencing the accelerating pace of change. And although AI is often portrayed in terms of solutions to solve problems in healthcare, transportation, and business productivity, there is also a darker side to consider.

There are concerns that AI will replace human workers, and some people fear the ultimate outcome will be that superintelligent AI-powered machines will eventually replace humans entirely. While this is a possibility, many experts believe that it’s more likely that AIs will enhance, not replace, humanity and that eventually, we might merge with AIs.

It’s essential to think about what might happen when a tool as powerful as AI malfunctions or is used with malicious intent. Consider the following two scenarios:

Scenario 1: AI created with bad intentions

Those who insist that technology is neutral will point out that a hammer can be used to build a home or to hit someone over the head. As with any technology in the wrong hands, AI could be created to help humans commit horrible acts. This might be an autonomous weapon programmed by the military, or a malevolent algorithm set loose by an individual hacker.

Fear associated with AI—a technology that is intelligent and capable of self-learning—is not unfounded. But it’s important to remember that humans also are highly intelligent and capable of rapid learning and improvement.

Moreover, it’s also worth remembering that harmful AI capabilities aren’t created in a vacuum. While one person or group is attempting to create something harmful, there is often an equal or greater amount of energy being invested to stop that harm and create countermeasures that limit risk and impact.

Scenario 2: AI created with good intentions goes bad

Another scenario is the runaway AI, in which a machine that was built with good intentions turns bad—a staple of classic Sci-Fi films like “Blade Runner” and “2001: A Space Odyssey.” Indeed, when the sentient computer HAL turned against astronauts in the 1968 Stanley Kubrick film, many viewers found the premise to be unrealistic. With the widespread use of AI, as well as its growing capabilities, this scenario may no longer seem as far-fetched.

Addressing concerns over whether AI will drive massive job displacement, Singularity University Co-Founder and Chancellor Ray Kurzweil explains that while certain jobs will be lost, new jobs and careers will be created as we build new capabilities.

Kurzweil notes that AI will benefit humans and that AI is less likely to be threatening than beneficial to us, and it benefits us in many ways already. In Kurzweil’s view, a robot takeover is less likely than a co-existence, where machines reinforce human abilities and accelerate our progress.
~SV~
Times Referenced: 2

Posted on: » Sun Dec 01, 2019 3:36 am #43

Jessica
Posts: 64
Joined: Fri Apr 14, 2017 3:03 pm
REFERENCING: Sterling Volunteer, Post #42, Posted Nov 22, 2019
Artificial intelligence will need new rules and regulations to protect society. Both good and bad events can come about as AI further integrates itself into our society at breakneck speeds. Regardless of the good or bad and moral or immora...

Re: Pitchforks And Torches Will No Longer Be Able To Stop The 1%

Post by Jessica » Sun Dec 01, 2019 3:36 am

A common denominator seen while Artificial Intelligence and Transhumanism are lovingly walking arm in arm down the aisle to a future technological marriage is an increase in inequality, specifically income and economic inequality. This theme of increased inequality is repeated over and over again in the literature. In the following articles I will first give an overview introduction of Transhumanism before talking about their evil love child called inequality.

GREEN EUROPEAN JOURNAL
An Eco-Social Perspective on Transhumanism
By Carmen Madorrán Ayerra, 16 August 2019 https://www.greeneuropeanjournal.eu/an- ... shumanism/

A view of Transhumanism

The concept of transhumanism refers to a multiplicity of philosophical currents that explore the possibility of using science and technology to go beyond the human species. The transhumanist vision resembles that of many modern utopias in that it critiques our existing situation through the imagination of a desirable future alternative. The difference is that transhumanism is also committed to searching for the most appropriate scientific and technical means to bring that future about. Considered by some as the “defining worldview of the postmodern age”, transhumanism can be understood then as a technoscientific utopia, a worldview that stems from a dissatisfaction with certain aspects of the here and now and which seeks to transform it. As opposed to literary utopia, transhumanism is utopian practice.[1]

Transhumanism makes a series of promises: the increase of physical and intellectual capacities, the elimination of genetic disease, and the potential for personalised drugs and vaccines. A tenet of the movement is that “the first human being to live a thousand years is already living.” In the words of the philosopher Antonio Diéguez, “it’s been a long time since there was a doctrine that showed such enthusiasm for changing reality.”[2]
Increased Inequality
There are numerous problems that arise when considering transhumanism in the context of a global ecological and social crisis such as the one experienced today. First, the problem of accessibility and supremacy is one of the most common counterarguments, as it is reasonable to think that the kinds of enhancements transhumanism proposes would further separate the rich from the poor. It is not a stretch to imagine a future in which enhanced individuals are the ones in positions of power.

Meanwhile, both the irreversibility of changes and the unpredictability of consequences should encourage the exercise of caution. Living in a world in which humanity’s actions have far greater impact and reach than ever before in history should raise our sense of responsibility here. Moreover, the appeal of the precautionary principle grows when faced with the possibility of irreversible changes. As Riechmann has pointed out, it is impossible to ‘un-invent’ the hydrogen bomb or genetic manipulation.
ISLES of the LEFT
Transhumanism & AI: Utopia or a Nightmare in the Making?
September 27, 2018 · Francois Zammit https://www.islesoftheleft.org/transhum ... he-making/

A view of Transhumanism
Transhumanism presents itself as a utopia. It promises advancement and progress beyond imagination. However, the question is: Whose utopia would this be? Will the advanced digital technology bring emancipation from routine, menial tasks? Or will it create a new underclass while helping the elite accumulate unprecedented power and wealth?
Kurzweil argues that the transhumanist project will succeed through the fusion of three components: genetics, nanotechnology and robotics. According to him, through an ‘upgrade’ from a biological body to one endowed with superior digital or biomechanical technology, humanity can achieve longevity, if not immortality. Biotechnology will provide the means to redesign not only embryos but also mature adults. Body tissues could be rejuvenated through genetic modification, and biotechnology may be utilised to attack and remove cancerous tumour formations. Nanobots would cleanse the new upgraded body of pathogens and viruses. In this transhumanist future, nanotechnology will be there to replace and augment organs with neural implants that will cater for new software downloads and will increase the individual’s neural abilities.
Shelley’s novel, ‘Frankenstein’, poses the question of what might happen if humanity, enabled by scientific discovery, becomes capable of creating new life. By practicing his newfound technology, Dr Frankenstein makes himself the equal of a deific creator—albeit it isn’t a new Adam he brings to life, but a monster. If applied consciously, biotechnological advances could offer a cure for diseases and longevity, as Kurzweil proposes, but what if they are abused by scientists employed by ruthless corporations or militaristic regimes?
Increased inequality
Although the transhumanist and singularity utopia envisions a leap forward for the human species in total, it ignores the profound social inequalities existing both on the local and on the global level. Taking into account the disparity in opportunities between the haves and the have-nots, it is a challenge to picture how they would cease to exist after the advent of the singularity.
The new technological revolution will instead make most jobs redundant. Whereas machinery like steam engines replaced manual and repetitive work, AI will outcompete humans in the intellectual sphere too, meaning that human workers will no longer have special skills to offer. They can become simply redundant. Hence, the new ‘useless class’ will play no economic role in society. Although we may argue that this does not make them ‘useless’, Harari justifies the terminology by pointing out how disenfranchised individuals will have no function within the new social order; their meaning of life may therefore be compromised.

Here is a broad sketch of what a transhumanist society might look like: the ruling class—the supreme owner of the new technology—will be able to control the manufacturing of goods and the provision of services without input from the lower classes. By enjoying access to biotechnology, the ruling classes will continuously enhance their abilities, health and longevity, thus gaining an enormous advantage over the rest. In such a society, power will be concentrated in the hands of a small elite on a scale unprecedented in human history.

This may well sound far-fetched, yet, given the status quo with all its injustices, the devastating consequences of the rise of AI and bioenhancement seem more than plausible.
As China Mieville once stated, we live in a utopia: it’s just not ours. A utopia for some might mean a nightmare for others. Will the advanced digital technology deliver a utopia for the majority, facilitating emancipation from routine, menial tasks? Or will it create a new underclass while helping the elite accumulate unprecedented power and wealth? We simply do not know yet. But given the prevalent contemporary trends, the latter seems more plausible.
NEWSTATESMAN AMERICA
The first men to conquer death will create a new social order – a terrifying one
Immensely wealthy and powerful men like Peter Thiel and Elon Musk want to live forever. But at what cost?
By Sanya Varghese, August 25th, 2017 https://www.newstatesman.com/science-te ... terrifying

A view of Transhumanism
In a 2011 New Yorker profile, Peter Thiel, tech-philanthropist and billionaire, surmised that “probably the most extreme form of inequality is between people who are alive and people who are dead”. While he may not be technically wrong, Thiel and other eccentric, wealthy tech-celebrities, such as Elon Musk and Mark Zuckerberg, have taken the next step to counteract that inequality – by embarking on a quest to live forever.

Thiel and many like him have been investing in research on life extension, part of transhumanism. Drawing on fields as diverse as neurotechnology, artificial intelligence, biomedical engineering and philosophy, transhumanists believe that the limitations of the human body and mortality can be transcended by machines and technology. The ultimate aim is immortality. Some believe this is achievable by 2045.
"Transhumanism doesn't have much to say about social questions. To the extent that they see the world changing, it's nearly always in a business-as-usual way – techno-capitalism continues to deliver its excellent bounties, and the people who benefit from the current social arrangement continue to benefit from it," says Mark O'Connell, the author of To be a Machine, who followed various transhumanists in Los Angeles."You basically can't separate transhumanism from capitalism. An idea that's so enthusiastically pursued by Musk and Peter Thiel, and by the founders of Google, is one that needs to be seen as a mutation of capitalism, not a cure for it."
Increased inequality
On an even more basic level, a transhumanist society would undoubtedly be shaped by the ideals of those who created it and those who came before it. Zoltan Istvan, the transhumanist candidate for governor of California, told Tech Insider that “a lot of the most important work in longevity is coming from a handful of the billionaires...around six or seven of them”.

Immortality as defined by straight, white men could draw out cycles of oppression. Without old attitudes dying off and replaced by the impatience of youth, social change might become impossible. Artificial intelligence has already been shown to absorb the biases of its creators. Uploading someone’s brain into a clone of themselves doesn’t make them less likely to discriminate.
The fear is that a transhumanist society would inevitably lead to "people lording it over others in a way that has never been seen before in history". It doesn’t take much to guess who would be doing the "lording".

“The first enhanced humans will not be ordinary people; they’ll be the people who have already made those ordinary people economically obsolete through automation. They’ll be tech billionaires,” says O’Connell.

If those who form society in the age of transhumanism are men like Musk and Thiel, it’s probable that this society will have few social safety nets. There will be an uneven rate of technological progress globally; even a post-human society can replicate the unequal global wealth distribution which we see today. In some cities and countries, inhabitants may live forever, while in others the residents die of malnutrition. If people don’t die off, the environmental consequences – from widespread natural resource devastation to unsustainable energy demands – would be widespread.
Times Referenced: 0

Posted on: » Sun Dec 01, 2019 7:58 am #44

Doctor A
Volunteer
Posts: 62
Joined: Thu Oct 15, 2015 2:30 pm
REFERENCING: Sterling Volunteer, Post #42, Posted Nov 22, 2019
Artificial intelligence will need new rules and regulations to protect society. Both good and bad events can come about as AI further integrates itself into our society at breakneck speeds. Regardless of the good or bad and moral or immora...

Re: Pitchforks And Torches Will No Longer Be Able To Stop The 1%

Post by Doctor A » Sun Dec 01, 2019 7:58 am

A major force creating economic inequality is the coming technological singularity and within this domain is the concept of convergence. This is where all of the existing technological fields are not only individually advancing exponentially but are also interacting with other exponentially advancing technologies in new and synergistic ways; this pushes the speed and complexity of the dawning new technologies to unimaginable heights.

The technological singularity can be defined as "a hypothetical point in the future when technological growth becomes uncontrollable and irreversible, resulting in unfathomable changes to human civilization. Due to the exponential processing power of the coming supercomputers, a technological singularity will occur. Advancing at inconceivable rates, it will overwhelm our human capacity to understand it." Although this produces an exponentially accelerating growth curve of technological development in total, that curve is really made up of a multitude of smaller curves, each accelerating exponentially in its own right, and it is the interactions of these smaller curves in various combinations and permutations that produce the final outcome. One technological field interacting with many others is what is meant by convergence.
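As a toy back-of-the-envelope illustration of why convergence compounds the acceleration (this is my own sketch, not anything from the article quoted below): suppose two fields grow exponentially with rates r1 and r2, and suppose a converged capability scales with their product. Then

c_1(t) = c_1(0)\,e^{r_1 t}, \qquad c_2(t) = c_2(0)\,e^{r_2 t} \;\;\Longrightarrow\;\; c_1(t)\,c_2(t) = c_1(0)\,c_2(0)\,e^{(r_1 + r_2)\,t},

so the combined curve is still exponential, but its growth rate is the sum of the individual rates; under this assumed multiplicative interaction, every additional interacting field stacks another rate onto the exponent. That, in miniature, is the intuition behind the "slingshot" language in the article that follows.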

Singularity University
Progress is About the Convergence of Technologies
No Technology Thrives Alone
Nov 29, 2016
https://medium.com/singularityu/progres ... c01c6ea446
The Law of Accelerating Convergence

The strangest, most interesting and magical-seeming creations of the future will occur at the intersection of multiple exponential trend lines. You might call this the law of accelerating convergence and can summarize it as follows:

As technology continues to exponentially accelerate, the interactions between various subsets of exponential technology will create opportunities to slingshot past the already breakneck speed of accelerating change in ways that are even stranger and more difficult to predict than the path of any individual exponential technology.

If we look at any singular outgrowth of exponential tech and focus solely on it, we’re missing the vast possibility space of the ways technology is about to reshape the world.

This is a tough concept to get our heads around.

Even trying to work through the ramifications and implications of a single exponential technology requires diligent thought and the willingness to take intellectual risks. Trying to comprehend how they’re all going to affect each other is six shades of impossible.

After all, what’s more important: artificial intelligence or biotechnology? What is going to have a bigger impact on the world: nanotechnology or solar energy? These questions don’t have easy answers. There’s an insidious assumption hidden within, which is that different technologies operate independently of each other. But in practice, they don’t. The importance of biotech might hinge on a crucial development in artificial intelligence. A new solar breakthrough could come about by applying concepts from nanotechnology.

The only way to know the future of virtual reality is to study the future of artificial intelligence. The only way to know the future of 3D printing is to study the future of biotech. The only way to know the future of energy systems is to study advanced materials design.
To gain an appreciation of the magnitude of the interactions involved, here is a list of some of the individual current-day technologies and related fields involved in this process. These in turn will create even newer technologies, all interacting and accelerating at an exponential rate. The complexity and speed of this giant technological web will be unfathomable.

3D Printing
Artificial Intelligence
Augmented Reality
Automation
Big Data
Biotechnology
Blockchain
Brain-Computer Interface
Computing
CRISPR
Drones
Energy
Entrepreneurship
Environment
Ethics
Finance
Manufacturing
Medicine
Tradecraft
Food and Agriculture
Health
Gadgets
Genetics
Innovations
Internet of Things
Longevity
Nanotechnology
Neuroscience
Robotics
Space
Stem Cells
Virtual Reality

Any one of these areas alone, moving at an exponentially accelerating rate, would create a future avalanche of new possibilities, let alone the convergence of all of these interactions taken together. What might happen boggles the mind.
Times Referenced: 0
