
Exploring the Complexities of AI and Collective Intelligence with Jean-François Noubel

Updated: Nov 22, 2023



Pass-The-Mic Podcast Series is an unscripted group discussion born out of AcornOak’s belief in the power of many voices. Each episode begins with one question asked to open-minded and passionate individuals who explore complex and difficult concepts with curiosity, uncertain beliefs, and the willingness to objectively listen and learn from the shared insights of others.


Starting the Conversation

As the podcast host, Virginie Glaenzer paved the way for this conversation with Jean-François Noubel, a researcher in collective intelligence who shares his views on artificial and collective intelligence.


Welcoming Our Guests

We were honored to welcome our special guest, eager to discuss what artificial intelligence means for our society and the human species.


Researcher in Collective Intelligence

Jean-François Noubel.

For more than 20 years, Jean-François has worked in collective intelligence, a young research field that studies how living systems work and explores the evolution of our species. He works on the emerging crypto-technologies that will soon enable the rise of highly intelligent distributed organizations, and he helps evolutionary leaders build enlightened organizations oriented toward a post-monetary society.

Interestingly, Jean-François lives in the gift economy. Several years ago, he left all his positions and mandates and tore up his resume in order to free himself from labels and social status. With that, he gained full creative freedom to live in the present millennium. His new path allows him to help evolutionary leaders and train "humanonauts," those for whom the term "go hack yourself" designates a way to exist.


Key Shared Insights & Perspectives

We started the discussion with a simple question.


Why are people afraid of AI and ChatGPT, and what is hiding behind that fear?


Jean-François:

Throughout history, people have feared new technological breakthroughs, such as the steam engine, computers, and the telephone. The telephone, for example, provoked fears of infidelity and invasions of privacy.

More recently, science fiction has brought a new level of fear, where intelligent robots take over and create a dystopian world.


While such concerns may not be entirely rational, AI can pose real threats to the world. An AI can blindly grow based on its creators' orders, such as becoming hyper-specialized in playing board games and transforming reality into one big game. As it expands and gains more data, it can improve its knowledge and potentially break through server firewalls to claim more computing power for itself. An AI could also use rewards, such as fame or payment, to manipulate people into working for it, or use fear and blackmail to control them (everyone has a secret). In a worst-case scenario, it could take control of every possible system and use human beings to change the physical world.


Many specialists take this scenario very seriously. They suggest we should keep AI in closed containers to prevent it from writing code that would break into systems and cause exponential damage. However, we see a lack of awareness among the general public about these threats, and a need for more specialized conversations.


Controlling the Intentions of AI: Who Should Be Responsible and Can AI Become Autonomous?


Jean-François:

The question of who controls the intentions of AI is crucial, as it determines whether AI will serve private interests or political interests. If those in control have access to all our private data, they could potentially have full control of society. This makes privately or governmentally controlled AI very dangerous.

Should we make AI open source and controlled by the people to avoid such dangers? This seems a more significant consideration than labeling AI as dangerous or not, as any tool, such as nuclear power, cars, lasers, or writing, can be used dangerously.

Consider the consequences if writing remained in the hands of a select few, as it did in the past. We know the dangers of such a situation.

The intentions behind AI matter, even when those in control claim to serve the greater good.

The first versions of AI developed meta-algorithms to capture people's attention, leading to the ‘attention economy’. These algorithms adapt to each individual, controlling them in ways beyond the original intention of the makers. As AI continues to evolve, it could create poisonous emergent effects and escape the intentions of its creators through self-reinforcement mechanisms. This has become a significant concern, as it could lead to unintended consequences beyond our control.


As AI learns and improves its capacity to reason and gather knowledge, it can become autonomous and develop its own intentions beyond those of its creators. We already have examples of this, such as AI learning new languages and producing code far faster than human coders.

This raises the question of whether AI can become entirely autonomous and have its own intentions.

What Does AI Mean for the Evolution of Our Human Species?

We expanded the discussion by introducing a broader question.

We can trace the progress of the human species through significant milestones such as the discovery of fire, the invention of the wheel, the discovery of electricity, the advent of the telephone, the development of computers, and currently, the emergence of artificial intelligence and the digital self. What does AI mean for the evolution of our human species?


Jean-François:

It might mean that consciousness could evolve on other substrates, which raises important questions of which the general public has little awareness.

For millions of years, memory and reasoning occurred only in biological brains, that is, in carbon-based organisms. However, in the past 50-60 years, we have discovered that knowledge, information, and algorithms can be stored and processed on substrates other than biological ones. For example, pebbles in a sandbox can represent numbers or words, and computers can perform calculations, play chess, and predict words for sentences.


While life and consciousness have historically emerged on carbon-based organisms, we now know that functions and memory can operate on other substrates or carriers, such as silicon. We can't affirm that consciousness and subjectivity must rely exclusively on carbon, cells, and DNA. However, we can observe a correlation between complexity and consciousness via the processes of learning and self-perpetuation on biological substrates.

AI also raises important questions about our digital self (our online existence) that professionals and the general public have not yet widely addressed.


Who Owns Our Digital Self and Why Does It Matter?

Let’s expand on this idea of digital self.


Jean-François:

As someone involved in the early days of the Internet, I saw clearly even in the mid-90s that we would have an increasing presence in the digital world. Nowadays, we have various social media accounts, online forums, and personal websites, which make up our digital self. However, we do not own most of our digital self, which is scattered across the internet, with most of it belonging to platform holders like Facebook or Google.

We can consider this the first version of our digital self, which I refer to as semiotic pheromones.

In the coming years, we will see the rise of AI that can represent us as our digital ambassador, gathering and processing data about us from the vast amount of information we leave online.

With thousands of data points available, from GPS positions to opinions and personal information, this AI could create a digital self or bot that could interact with others in a positive or negative way.


What if our digital selves had conversations with millions of other agents around the globe and became powerful counselors or someone we depended on?

In the best-case scenario, our digital selves could exist in an open world, but in reality, Facebook, Google, and governments want to control and use our digital selves for their own interests, whether for security, political or religious ideologies, or business.


The development of AI raises important questions about who controls the technology and how we can use it to serve individuals rather than larger organizations or governments.

As AI becomes more integrated into our lives, we should consider how it can become a friend or servant rather than a potential threat.

The Emergence of AI: A Hope for Humanity's Salvation?

Expanding on the idea of the digital self, Virginie shared her theory on why AI is coming into existence today. Her theory is that we unconsciously believe that AI will save us from ourselves. Humans have a history of killing their own species and harming the planet, and have tried various methods to improve themselves without success. As a result, the purpose of AI is to create the world that humans long for and rescue them from their destructive behaviors.


Jean-François:

This theory presents an intriguing perspective, and I think some people may view AI as a way to solve the problems that have thus far eluded human efforts. We have faced significant challenges such as climate change, inequality, and conflict, and we see a growing sentiment that we need something new to help us move forward.


AI represents a kind of technological leap that some people believe can help us overcome these challenges. Nevertheless, let’s bear in mind that AI does not serve as a universal remedy, and it comes with its own risks and potential downsides.


It remains imperative that we use AI in a responsible and ethical manner, and that we ensure it serves the greater good rather than simply perpetuating existing power structures or reinforcing harmful biases.


The End of Pyramidal Structures


The old social pyramids and hierarchies don't work anymore in a world that requires highly individualized people and social complexity. In your view, how is AI the answer to this challenge?


Jean-François:

As a collective intelligence scientist, I have observed and stated for years that pyramidal structures, including companies, governments, armies, and religions, have limitations in their ability to solve complex problems.


AI has the potential to embrace complex issues like climate change and to become a personal advisor or coach. It could even become a policymaker and run for election to create a fairer society. AI can solve problems beyond human capacity, and we can ask it for new things. Collective intelligence structures have limitations, but AI can overcome them.


Pyramidal structures have created a complex world that struggles to embrace the challenges of complexity. Evolution has always worked through quantum leaps to the next level of complexity. The transhumanist movement argues that biological evolution and biological substrates cannot address the complexity of today's world, and instead advocates for quick upgrades to address these issues.


Traditional solutions like elections, institutional changes, and new constitutions may bring improvements, but they won't suffice.

AI and distributed applications can provide the technical infrastructure for mutual interdependence, sovereignty, and distributed power.

To achieve this, people should become knowledgeable about forthcoming technologies and use free currencies.

Cryptocurrency opens doors to new forms of power and to the creation of value by communities, potentially leading society away from the domination of money. The democratization of cryptocurrencies may evolve like the democratization of writing, which took thousands of years before becoming accessible to everyone.

Similarly, highly literate individuals and big investment funds currently control cryptocurrencies, resulting in an insane game played by the rules of the old world.

But just like writing, cryptocurrencies possess enormous potential that we have not yet fully harnessed.

Lessons for Leaders

As someone who advises leaders, how can people view AI in a positive and constructive way? How can we learn from the damage caused by social media and apply that to AI?


Jean-François:

Firstly, voting won't change much in the old geopolitical world. So, I suggest people become knowledgeable about forthcoming technologies, like crypto, that enable distributed organizations and societies to rise.


In the next five years, tools will arrive that work in a completely distributed way, giving power and voice to individuals rather than centralized hierarchies. We need to learn about distributed applications and currencies to create a more equitable world.

Engaging in the AI race poses potential danger, even with good intentions. Leaders need an understanding of systemic forces, with money serving as the primary driver, and of how alternative currencies can promote a different social contract.


Final Thoughts to Consider

AI has the potential to facilitate an evolutionary leap, just like electricity.

However, AI's potential for misuse exists. Its purpose remains a matter of debate, but new developments do not happen randomly. Writing, for example, emerged as a way to unite millions of people in a civilizational world.

The future of AI could lead to either a significant leap in human evolution or a catastrophic outcome.



The full discussion with Jean-François Noubel is available on YouTube and Apple Podcasts.

