The technical systems that we create are becoming ever more complex. We no longer fully understand them ourselves.

If you find this article too long or too difficult, you can read just this part to understand why it was written and what it is about.

Martin J. Rees, an analyst of global risks, estimates the likelihood that humankind survives the 21st century at 50%. It is like flipping a coin... And he considered only the "suicidal" scenarios, such as a global war, an environmental crisis, or man-made catastrophes, excluding external threats like an extraterrestrial attack or even a meteorite impact. In other words, the probability is calculated from the well-known "global challenges".

The main idea of this article is that global challenges are the easier-to-understand, more visible cases of a general process: the ever-increasing complexity of the systems humankind has to deal with, and the increasing complexity of humankind itself as a system. The environmental crisis, for instance, may be related to the fact that ecological problems, as a rule, require collective decisions, which are practically unachievable today because of the critical growth in the complexity of our civilization, as well as the disproportionate complexity of the ecosystem itself compared with other systems we can work with. The ever higher level of militarization is another consequence of the growing sophistication of systems: thanks to new, powerful, and more accessible technologies, the number of actors potentially harmful to civilization has grown critically. And the increasing likelihood of a man-made catastrophe is obviously the result of the over-complexity of the technical systems we create.
Following my definition, within the framework of this article the "complexity of a system" will mean the number of elements in the system, the number of connections between them, and the share of hidden connections and elements: those that are not evident or accessible because they exist in another physical or ontological space.

Thus, for example, the communication between two internet servers may seem rather complicated because of the number of elements and connections involved, but all these transactions can be detected and logged, and given sufficient motivation (the desire for an extra star on one's shoulder straps, say) this communication can even be shut down completely. Which means that, after all, the system is not that complicated.

On the other hand, the communication between two neighboring anthills is, from the human point of view, very complicated: it includes inaccessible, underground channels, that is, channels in another physical space, and, as far as we know (and we really do not know much), it also includes connections of a different, chemical nature, such as the exchange of scents; in other words, communication belonging to another ontological dimension. We can neither fully understand nor (fortunately) shut down this communication system.
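The contrast between these two examples can be written down as a minimal sketch. The numbers are invented for illustration, and the whole idea of recording the criteria as a small data structure is mine, not a formalism from the literature:

```python
from dataclasses import dataclass

@dataclass
class SystemComplexity:
    """The three criteria of complexity used in this article, in the simplest form."""
    elements: int        # criterion 1: number of elements
    connections: int     # criterion 2: number of connections between them
    hidden_share: float  # criterion 3: share of connections/elements that are
                         # inaccessible to us (existing in another physical or
                         # ontological space), from 0.0 to 1.0

# Two internet servers: every transaction can be detected and logged.
servers = SystemComplexity(elements=2, connections=120, hidden_share=0.0)

# Two anthills: underground tunnels and chemical signalling stay hidden from us.
anthills = SystemComplexity(elements=2, connections=120, hidden_share=0.8)

# With equal element and connection counts, the anthills are still the more
# complex system by the third criterion:
print(anthills.hidden_share > servers.hidden_share)  # True
```

The point of the sketch is that the first two criteria are counts, while the third is a share: two systems with identical visible structure can differ sharply in complexity.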

In this article I am going to analyze the evolution over time of three systems (or groups of systems), based on the criteria of complexity formally defined above. They are as follows:

  • Our civilization;
  • Technical systems created by our civilization;
  • The systems that our civilization will have to deal with.
However, another definition is required: hereinafter, by "our civilization" I mean the group within humankind characterized by a common global culture, system of knowledge, and worldview, of which I consider myself a part and on whose behalf, as a result, I can speak. In other words, this is the most global of the civilizations within humankind. Conventionally, it can be defined as the one with a broadly academic worldview and a system of international connections, such as the exchange of goods and information. It is obvious, I believe, that this civilization is not the only one (and the same limits the relevance of this analysis): for example, the worldview and the system of values of indigenous peoples from traditional cultures differ so much that, by these criteria, they must be assigned to other civilizations, of which I cannot consider myself a part for total lack of experience of living within them. Let me add that the readers of this article, in all probability, belong in my classification to the same global civilization, and thus the term "our civilization" in my usage is appropriate.
The increasing complexity of our civilization. A crisis of the institutions of governance.
Having introduced the first criterion of complexity, the number of elements in the system, we can point to the new elements that have emerged in recent decades: decolonization (the number of UN member states has grown from 50 to 193); the collapse of the bipolar world and, as a result, the growing number of geopolitical actors; the emergence of new uncontrollable groups, both horizontal (hackers, biohackers, other activists) and vertical (terrorist and marginal religious organizations); and the emergence of new global actors, the transnational corporations.

Some principally new, non-human actors have entered the game as well, namely deep-learning neural network algorithms (the closest thing yet to the "artificial intelligence" of science fiction). I will quote just a few "news" items, two years old:

Whether this can be considered genuine art is a philosophical, or rather even a religious, dilemma that has to do with our picture of the world and our convictions; I consciously leave it outside the framework of this article. However, it is already clear that humankind has lost its monopoly on creating something principally new.

Another criterion of complexity is the number of connections within the system. For our civilization these would be transport networks (mass civil aviation, container shipping, etc.) and information networks (the golden era of TV, the emergence of the Internet, new media and, finally, social networks and messengers).

For example, at the beginning of 2016 Facebook published a study of the number of "handshakes" (the term used in the famous "six handshakes" theory, better known as six degrees of separation). The graph of 1.59 billion users of the social network was analyzed for the research period. The average degree of separation (which is what the number of "handshakes" means) turned out to be 3.57. Back in 2011, for 721 million Facebook users this parameter was 3.74; so even within those five years our civilization became more integrated.
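At the heart of such a study is the average shortest-path length in the friendship graph. On a toy graph it can be computed exactly with breadth-first search; Facebook, of course, had to estimate it statistically for 1.59 billion nodes, so the four-node network and the function below are purely illustrative:

```python
from collections import deque

def average_path_length(graph):
    """Average shortest-path length over all ordered pairs of nodes,
    computed by running breadth-first search from every node.
    Exact BFS like this is feasible only for small graphs."""
    total, pairs = 0, 0
    for start in graph:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbour in graph[node]:
                if neighbour not in dist:
                    dist[neighbour] = dist[node] + 1
                    queue.append(neighbour)
        for other, d in dist.items():
            if other != start:
                total += d
                pairs += 1
    return total / pairs

# A tiny "friendship" ring of four users: 0-1, 1-2, 2-3, 3-0
g = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(average_path_length(g))  # 1.3333333333333333
```

Facebook's published figures count intermediaries rather than hops and were obtained with statistical sketching rather than exhaustive search, but the underlying quantity is this same shortest-path average.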

The emergence of unpredictable, chaotic points of crystallization (hacker and biohacker teams, and innovators in general) can be assigned to the third criterion of complexity, namely the rate of change of elements and connections within the system. Besides, the very lifestyle has changed: nowadays it is quite normal to change not only your family and job, but also your country of residence. The choice of higher education is no longer a life-determining decision; knowledge and skills become outdated, and over their professional lives people have to join multiple communities. The above-mentioned transnational corporations may turn out to be yesterday's start-ups.

And finally, the non-evident connections in the system. The modern world may seem transparent, yet despite the numerous operations leaving digital traces, even the most ardent and powerful Big Brother would (fortunately!) fail to record, let alone manage, the system of connections in our civilization. As mentioned above, self-organizing groups have horizontal connections that change so quickly, and are sometimes so well encrypted, that in order to follow them one would need either a proportional number of supervisors (one cannot help recalling the bitter irony about a country with 50% of the inhabitants imprisoned and the other 50% guarding them; fortunately this is not yet true of the whole world) or astronomical computational capacities; and the latter would have to deal not so much with the data of the present moment (the information is likely to be outdated by the time the analysis starts) as with forecasting. Computational forecasting of complex social systems was first described by Isaac Asimov in his "Foundation", and so far, both in our reality and in Asimov's, it does not seem very feasible.

One more factor that makes the connections within our system even less evident is worth mentioning: decentralization, including decentralization at the structural level. Peer-to-peer systems such as blockchains, torrents, the Tor network, etc. make it possible to exchange information (and resources) entirely outside centralized infrastructure. The picture below is a beautiful illustration of this principle:

CC 2.0 Kyle Harmon Adapted
Here you can see the peer-to-peer messenger FireChat covering the territory of the Burning Man festival (itself, by the way, another case of self-organization). FireChat is a cyber-anarchist project that allows people to communicate without any external infrastructure (if, say, the government switches off the cell towers around the square where a rally is taking place). The data travels directly between devices via Bluetooth or Wi-Fi, relayed through intermediate devices.

But let us get back to global challenges. As shown in the introduction, the difficulty of solving them stems from the growing complexity of our civilization. But how does that work? The problem is that the complexity of the composition and operating logic of the institutions meant to meet those challenges (e.g. the UN and national governments), at least according to their declared missions, is obviously lower than that of the system they govern, namely our civilization. Their hierarchical implementation logic cannot match the complexity of horizontal self-organizing processes. This is not surprising: the UN, for example, was established first of all to ensure the peaceful coexistence of peoples after WWII and to prevent WWIII, and it seems to have coped with that task; if a WWIII breaks out, it is unlikely to resemble the previous one. Yet the governance system of the UN has not changed since its foundation and corresponds to the political governance style of the twentieth century. The UN Convention on the Rights of Persons with Disabilities, the latest one adopted to date, was under negotiation for about 20 years; the UN is regularly criticized for its inability to respond promptly to emerging crises, such as Boko Haram in Nigeria, the Ebola epidemic in Africa, and the Haiti earthquake. The world has become much more complicated since the mid-twentieth century. The number of actors with nuclear weapons is limited to nine states, and that number can still be controlled within the logic of hierarchy (though even that is not always easy). To acquire nuclear weapons while bypassing the UN's oversight, a state would have to invest astronomical amounts of money into research, and even more into its education system, over a couple of decades. Bio-laboratories capable of producing bio-weapons based on genome-editing techniques, however, can be set up in any medium-size university, or even in an average biohacker's garage.
Researchers from the University of Alberta (Canada) have demonstrated how a dangerous virus (horsepox) can be created using only commercially available materials for genome editing. A basic kit like that (not the one the University of Alberta researchers must have used) costs $149 on Amazon:
Thus the number of potentially harmful actors has grown by an order of magnitude.

The situation with the ecological crisis is no better. For several decades already the UN has been trying in vain to organize consolidated action in vertical logic, even in such a specific and relatively local situation (compared to the composition of the atmosphere or global warming) as the plastic garbage patch in the Pacific. Negotiation and decision-making are slow and inefficient because the sophisticated system of modern states fails to agree on who is going to remove the waste produced by all, and for whom. It is worth mentioning that NGOs act much more efficiently in this situation, demonstrating a better ability to act in complex systems; but even NGOs (especially the biggest, international ones that are supposed to contribute to solving this problem) are rather vertically organized and bureaucratic. Anyway, the plastic patch is still there. I took a photo of this still life on a beach of Russky Island (near Vladivostok). My personal experience, of course, cannot prove a process of civilizational scope, but I was writing this article at the same time as taking the photo and could not help including it here (see below).

In cybernetics there is W. Ross Ashby's "Law of Requisite Variety"; in a nutshell it reads as follows: for governance to be effective, the variety of the governing system must be no less than the variety of the governed system. Variety is a component of complexity, and thus Ashby's law exposes the mismatch between the governance methods of our institutions and the complexity of the systems they are supposed to govern.
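Ashby's law has a simple quantitative form: a regulator that can answer with only V_R distinct responses cannot squeeze V_D possible disturbances into fewer than ceil(V_D / V_R) outcome states. The numbers below are invented for illustration:

```python
import math

def min_outcome_variety(v_disturbance: int, v_regulator: int) -> int:
    """Lower bound from Ashby's Law of Requisite Variety: a regulator with
    v_regulator distinct responses can at best map disturbances onto outcomes
    in groups of v_regulator, leaving at least ceil(v_d / v_r) outcome states.
    Full control (a single outcome) requires v_regulator >= v_disturbance."""
    return math.ceil(v_disturbance / v_regulator)

# A hierarchical institution with 10 standard procedures facing
# 1000 qualitatively distinct situations:
print(min_outcome_variety(1000, 10))    # 100 outcome states stay uncontrolled

# Variety matched to variety:
print(min_outcome_variety(1000, 1000))  # 1: full control is at least possible
```

In other words, only variety can absorb variety; a governing body with a fixed, small repertoire is mathematically unable to hold a far more varied system to a chosen course.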
CC Peter Levich
The growing complexity of the technical systems being created. The risk of a management crisis.
Let me start by illustrating the number of elements in some technical systems (the number of connections in them should be a correlated parameter):
All these systems seem quite complicated, and they are all created by humans. So what is this example about? Beginning with the car, none of these systems could be created by an individual, or even by a group of individuals alone. Only a group of people in combination with computer-aided engineering programs, databases, and computational capacities can create such systems. And what is more important, a human being is not able to repair them without computer support systems; he will not even understand what is wrong. For modern cars this is still more or less possible, if the damage is obvious, but beginning with airplanes a person can do nothing without a computer. The complexity of these systems is beyond individual human capacity. Yet so far it is on the verge of our understanding: we are still able to explain how they work and, having received information about the damage through a computer, we can at least decide what particular repairs are required. As for the two remaining complexity criteria, the dynamics of change of connections in the system and the degree of implicitness, these systems are quite simple: the connections are permanent (though numerous) and they are all well known (if only to the computer).

But what is the most sophisticated technical system created by our civilization? What is our technical pride? Let us assume it is the Large Hadron Collider (LHC). I failed to find data on the number of its components, so this illustration is my value judgment:
It took international research groups many years to create this system, obviously using engineering software and huge computing capacities. Do we understand this system? Probably yes, but it is a group understanding, a distributed one: only a group of people can fully explain how it works; a single person may have only a basic understanding of its principles. Can we repair it? Two weeks after the official launch of the LHC (which, by the way, came 22 years after the initial idea) there was an accident, and it took 15 months to recover from it. For the following 5 years experiments were run only at lower energy; then, for 2 more years, the Collider was being modernized and no experiments were carried out at all.

By no means would I underestimate the work of the scientists and engineers behind the LHC. On the contrary, I want to say that this may be the most sophisticated technical system ever created by man! It is also the most vulnerable one, because any deviation from the initial model requires a huge budget of time and computing capacity just to understand the problem. The system still has permanent connections (up to quantum effects, which already matter here) and, in theory, no implicit connections; in reality, however, implicit connections do occur, and depending on how the components of the system malfunction, they may turn out to be so multi-factorial that one cannot foresee all potential cases, even with computer calculations.

But let us get down to the systems that fully meet the complexity criteria, in other words, those with dynamic and implicit connections. Consider a fundamental change in computer programming: the transition from deterministic to evolutionary algorithms. The famous deep-learning neural network algorithms belong to the latter. Deterministic software successfully won chess games against humans, but only a self-learning program could win at Go. Go allows many more scenarios; it has another level of complexity, and judging by the fact that this particular game proved to be the stumbling block of deterministic algorithms, it is a principally new level of complexity. In order to win, the software had to obtain an additional degree of freedom: the freedom to learn by itself, to choose its strategy and change its own algorithm. In other words, a technical system has acquired the third criterion of complexity, the ability to change its connections in time. And, by the way, it takes such software much less time to learn than it would take a human being. As a small addition (one which people would rather had not happened), these systems have also acquired the fourth criterion of complexity: implicit connections. Self-learning algorithms work as a black box and cannot explain their choices. They can only say that the decision for this particular situation was made on the basis of a sample of billions of cases. And this is where our understanding of the technical systems created by ourselves ends. We can neither repair such a system nor predict its behavior. Dealing with other similar systems, they invent languages that we do not understand. This example will be especially important for the final part of this article.
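AlphaGo itself is far out of reach of a few lines, but the evolutionary principle it rests on, "change your own parameters according to feedback instead of executing a fixed rule", fits in a minimal sketch. Everything below (the hidden target, the fitness function, the (1+1) scheme) is an illustrative toy, not a description of any Go program:

```python
import random

random.seed(42)

# A hidden "right answer" the program knows nothing about; it only
# receives a scalar fitness signal, like a win/loss score in a game.
TARGET = [3.0, -1.0, 2.5]

def fitness(candidate):
    # Negative squared error: higher is better.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def evolve(steps=5000, sigma=0.1):
    """A (1+1) evolution strategy: mutate the current 'strategy' and keep
    the mutant whenever it scores better. No rule about HOW to improve is
    ever written down; the program rewrites its own parameters."""
    current = [0.0, 0.0, 0.0]
    for _ in range(steps):
        mutant = [c + random.gauss(0, sigma) for c in current]
        if fitness(mutant) > fitness(current):
            current = mutant
    return current

solution = evolve()
print([round(x, 1) for x in solution])
```

A deterministic program embodies its author's explicit understanding; here the "understanding" exists only as evolved numbers, which is exactly why such systems cannot explain their choices.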

The picture above shows three designs performing the same function. What matters is not the function itself, but the fact that all the load vectors and other characteristics specified in the requirements are identical. The left one was designed by a human; the ones to the right are the product of computer optimization (the further to the right, the more the computer was "allowed" to amend). The rightmost one bears the same load as the man-made one on the left, but is 75% lighter.

It would be appropriate to mention that these specimens of creative activity by algorithms, the new actors in our civilization, do not (as yet) extend to geopolitics and civilizational processes. I will cite another, gloomier case that does belong to this sphere: SKYNET. No, not the one you may be thinking of, though the difference is not that important: this is an NSA (US) program for analyzing the Big Data of mobile communications in Pakistan. It uses machine-learning algorithms to assess the probability that any given mobile-phone user in Pakistan is a terrorist. Based on the data provided by the software, citizens were targeted for elimination; to complete the picture, they were killed by drones. According to the Bureau of Investigative Journalism, between 2,500 and 4,000 people have been killed by drones since 2004. Most of these victims have been classified by the US government as extremists.

We do not know at what particular moment the machine-learning algorithm began operating, but its development started back in 2007. The system has the profiles of all users and can track their movements. Switching off a mobile phone is considered an attempt to avoid monitoring; swapping SIM cards with another person is another suspicious marker (the program can recognize the swap by a discrepancy between the location of a SIM card and the activity of its user in social networks). The NSA was strongly criticized for the use of this program, in particular because it had no other evidence than "that is what the software shows". But as mentioned above, the problem with such programs is that they cannot explain their decisions. Thus it is a fully-fledged case of a non-human agent that directly influences human lives: uncontrollable and, more importantly, incomprehensible to anybody, including its designers.
Let us get back to our graph of the complexity of systems. We have discussed the LHC, but even the LHC is orders of magnitude simpler than such a system as consciousness. There are about 86 billion neurons in the human brain, to say nothing of the number of connections (one neuron may have up to 20 thousand connections with others). It is estimated that the human brain has as many "transistors" as the entire IT infrastructure of the planet. The connections change in time (e.g. in my brain in the course of writing this article, and hopefully in yours after you read it); they are not evident: the human brain has not yet been mapped down to the neuron; the resolution of the latest mapping research is 20 micrometers (50 times more accurate than the previous one), while the size of a neuron varies from 3 to 130 micrometers.
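A back-of-envelope check of these scales (the per-neuron figure quoted above is a maximum, so the result is an upper bound, not an estimate of the actual synapse count):

```python
neurons = 86e9                       # ~86 billion neurons in a human brain
max_connections_per_neuron = 20_000  # the upper figure quoted above

# Upper bound on the number of connections in the brain:
upper_bound = neurons * max_connections_per_neuron
print(f"{upper_bound:.2e}")  # 1.72e+15
```

Even this single-number bound, on the order of 10^15 connections, dwarfs the component count of any engineered system discussed above, before we account for the fact that the connections change and are largely unmapped.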

The increasing complexity of the systems we have to deal with. A methodology crisis.
Thus the phase barrier of complexity on our conditional map can be placed somewhere between the LHC and consciousness. I call it a phase barrier because crossing it requires a principally new method of working with systems. Previously, the increasing complexity of systems merely required more computational power and more human or other resources. Recall the comparison above: the number of elements (not even connections!) of a human brain matches the components of the global IT infrastructure. You cannot use Lego logic here. W. Ross Ashby's Law of Requisite Variety applies: the computing system must be no less complex than the brain. Could we use the whole global infrastructure to study a single human brain? And what if we doubled it? Theoretically yes, but not in the near future, since even Moore's Law has slowed down due to quantum effects. Another option is to change the methodology of research: using a holistic or integral approach instead of assembling Lego bricks. On the other hand, those very quantum effects may turn out to be useful: a quantum computer may prove able to work with systems of much greater complexity. But, first, this is not yet certain, and second, the transition to quantum computers may itself be considered a phase transition.

Let us make it still more complicated and take the global ecosystem. I do not know exactly how many elements it has, but a million species of insects alone, to say nothing of the actual number of individual insects, places it several orders of magnitude higher on this scale (plus, the consciousness of living creatures, see the previous point, is its integral part as well).

This system meets all the criteria of complexity as well: the connections are dynamic, and a part of them (maybe the bigger part) is not evident. Let me remind you that by non-evident (implicit) connections I mean connections that exist in another physical or ontological space. For an example of such implicit connections, see the BBC video about mycelium as an "Internet for trees"; and that is quite a compliment to the human Internet, because the Internet is a much simpler system than mycelium, which belongs to another physical space (underground) and another ontological one (another kingdom, other principles). And before you watch the video, try to visualize the machine-designed part mentioned above, the one computed by an algorithm.
In Waldorf schools they have given up Lego constructors in the learning process, because there were cases when a child would tear off a bird's wings and legs and could not understand why everything could not be assembled back together, just as in the lesson. One could say that the silly child did not see the difference between a living creature and a constructor set! But no, this was a smart child, who applied the experience received in one sphere to another; and the silly ones are we, our whole civilization, because we, adult and smart, do not know the answer to that question.

In fact, all our technological success is based on the Lego-constructor method. We take nature apart into components (ore, wood, water) and then put them together following a certain algorithm (which may be quite complicated and include numerous elements and connections), and as a result we get, say, a car. This method is convenient to transmit: if another person on the other end of the planet follows our algorithm accurately and without mistakes, he will get an identical car. Our civilization has made its technological breakthrough thanks to this method.

But this logic is applicable only to systems below a certain level of complexity. Already with the bird it does not work, nor does it work with living systems in general. It does not work with consciousness: we do not understand where, after what number and level of neuron interactions, it begins, and thus we cannot work with it as with a constructor set (this is an extreme example, since our consciousness at least has the characteristics of a living system, but it is important for my further argument).

Emergence may be the applicable term for this divide in the complexity of systems. Emergent systems are systems whose elements do not possess certain qualities in isolation, while the system as a whole acquires them once the elements are connected in a specific way; that is an emergent quality. A neuron has no consciousness, but the brain does. Life is a slightly more complicated notion: a living body consists of living cells, yet if we go down to the molecular level we see the same picture: molecules have none of the qualities of a living organism, while a cell does.
Let us consider the case of the forest in more detail; I will leave out the details of working with other emergent systems, e.g. consciousness. Why can't we work with the forest as we do with technical systems? Hidden connections do not allow us to have a common language field with this system. I mean not just a verbal or software language, but an interface language of communication in general. Say, I am now in my hotel. Its language is clear to me: I know how to use the air-conditioner control panel, how to open the window, and so on. The language of the forest is incomprehensible to us. Beyond the implicit connections there is a still deeper level of obscurity: our rhythms of activity differ. We think that nothing is happening in the forest. The branches are swinging in the wind, so what? Few of us have ever seen a tree fall (not when we cut it, but by itself). Rationally we know that the forest changes, grows, dies, and regenerates; but this rational knowledge does not at all promote the synchronization of rhythms. And a common language is only possible if the systems share a common rhythm. That is why the more our civilization accelerates, the more difficult it is for us to understand the forest. For the present, our language of communication with the forest ranges from the axe to "afforestation", neat planted rows of trees that have nothing in common with a real forest as far as the level of complexity is concerned. This case, by the way, can broaden our understanding of complexity: implicit connections may belong not only to another physical or ontological dimension, but to another temporal dimension as well; they may have another rhythm.

Our civilization today is facing the need to work with systems of this level of complexity, with emergent systems, but we only have Lego methods. We have to somehow repair the global ecosystem that we have disturbed, but in most cases we are not even able to understand a local ecosystem, say a permaculture vegetable patch.

We have been rather successful with Lego methods in prolonging human life (our civilization has chosen this as one of its goals, proportionate to the other global challenges, but let us not consider that matter in this article): we have learned how to replace "broken" organs and to cyborgize the body. But now humans are approaching the processes that reveal the truly complicated emergent characteristics of living systems: consciousness (Alzheimer's disease), and complex chaotic, unpredictable processes in the body (cancer, and aging as such). And this is the current complexity barrier on which the bio-medical frontier is stuck.

Peter Levich & Deep Dream Generator
I do not want to finish on a pessimistic note. One can notice a gradual change of discourse: take, for example, "Amazing Life of Plants", quite a popular book today. Of course, we are only beginning to approach an understanding of a world that has been hidden from our civilization. There may once have been some sort of understanding of it, but it is mostly lost. And that "understanding" was not entirely mental; the meaning of the word was different from what it implies today. So for us as a civilization this is a new attempt, with an updated background.

And here I will disclose the meaning of my hint, though an attentive reader may have already understood it. There is a lot in common between my descriptions of deep-learning neural network technologies and of living systems. This is, in fact, not surprising if you just look at the name: neural networks. But whatever you call it, let us look at the similarities: the same level of complexity, implicit connections included; a language that we do not understand; even the rhythm is different, as in the case of the forest, though this time the other way round: much faster for the software. This resemblance allows us to hope that evolutionary algorithms may come to understand living systems. However, they will have to not only understand but also explain it to us; otherwise we will find ourselves in a situation where a new layer of the incomprehensible appears on top of the previous one. What is important for me here is that these technologies are potentially able to work with emergent systems.

The search for methods of working with emergent systems (whether in the past or in the future), however, does not go smoothly and takes time; but there is hope. For me the problem is that our civilization has a very fragmented picture of the universe and creates new technologies based on this vision; the complexity of this technological discourse is incommensurable with the complexity of the systems we have to deal with; in particular, the problem reveals itself in the goal-setting for new technologies.

Thus there are two questions left:

  • Assuming that other civilizations or other cultures have a method of working with emergent systems, how can we learn it from them?
  • Assuming that technologies can, in theory, work with emergent systems, what ontologies should be combined with the modern discourse of their design, and how?

Material creators
Peter Levich
Andrey Lukov
Olga Fadina

This article is made possible by:
  • The Protopia Labs movement and conversations with Pavel Luksha, Ivan Ninenko and Mikhail Kozharinov;
  • Conversations and the general existence of Alica Dendro in my life;
  • The event "Immersion: Ecophilosophy. On the Verge" and conversations with Igor Polskiy;
  • Conversations and practices with Grigory Chernenkov, Dmitry Senin and many others;
  • The event "Immersion in traditional culture" of the EcoPotok of the Metaversity and conversations with Dmitry Zakharov;
  • The event "Immersion: Forest" and conversations with Sergei Medvedev, Andrei Dunaev and Alena Svetushkova.