Leaving the Web2.0
“It will not be all right, if everything goes as before”
Natalino Balasso – Italian comic actor
The modern Web, or Web 2.0, presents many pitfalls of which users are often unaware. The historical evolution of mass media has led us through many phases, each trying, step by step, to catch up with the problems that emerged in the previous one. Seeing these problems in advance is never easy: it is pioneering work that tries to help users figure out what the next Web generation might be, and probably will be. It requires a more open state of mind, willing to accept that behind what we use and see every day there may be dynamics that, despite not being evident, are still very important and influential.
Let's have a look at some of them together, and at how we at Etherna believe we can deal with them to keep the evolution of mass media going.
An historical point of view
Before the Internet arrived, newspapers and television were by far the dominant media of mass communication. Information flowed in one direction, from the few to the many, and there was no way to publicly reply to an article, except through open letters that editors could then choose whether to publish, or through public demonstrations. There was, in essence, a monopoly on information, so uniform in its message that the famous writer Pier Paolo Pasolini said, paraphrasing, that "the TV medium succeeded in homologating the thinking of the Italian people even better than Fascism did". Strong statements, but they describe very well the Western, consumerist, pro-American mindset that was being established in those years. There was not much room to express different ideas at that time and, as we will see, contrary to what we would like to believe, things are not that much better today.
In the '90s came the Internet boom and the birth of the World Wide Web, in what will be referred to below as "Web 1.0". Computer networks could finally put people in direct contact with each other; they could exchange ideas and, above all, reach opinions of every kind, including discordant ones, compare them, and form their own. Blogs and chat rooms appeared, and the first hacking philosophies and manifestos were born. There was something new: an ethics born from below, and new spaces to explore.
Unfortunately, not all that glitters is gold. The Web was difficult, expensive, and hard to deal with. It kept the telephone line busy, drove phone bills up, and required mastering too many technical concepts: browser, URL, HTML, directory, email, provider. It was not for everyone. Today these terms are taken for granted, of course, but it was not like that back then. Life was even harder for those who wanted to make their content available to others through the Internet, for example by sharing articles on a blog. They had to deal with web page composition and hosting providers, learn to write HTML code, or find someone to do it for them. Very challenging. But that is how it was: it was something new, and at the beginning there were only the pioneers.
With the arrival of social networks, the "Web 2.0" phase began, and the Web discovered itself dynamic. A multitude of disorganized, hard-to-maintain web pages evolved into programs running on servers, able to process requests and provide custom-made content for every user. These providers also collect user-generated content and publish it on the users' behalf, making it available to everyone. Gone was the need to learn a page composition language like HTML, to pay a hosting provider ourselves, or to spend time and resources building graphics and studying site navigation. Sites are now reached through omnipresent search engines, messages are shared with a post on a social network, and registration (with the profiling that comes with it) is done free of charge, with a few clicks and an email address. Anybody can now use and publish on the Internet. Everyone has their own voice.
Here too, however, not all that glitters is gold. The "nice toy" known as freedom of speech does not last long: when the old administrators of information realize that something is getting out of hand, they start pressuring the big providers to regulate it. The independence from the single supplier gained with the advent of Web 1.0 is in fact lost, and users go back to relying on a few colossal portals, mostly unknowingly accepting that they are no longer the true owners of what they publish. And so third parties return to interpose themselves between content creators and their public, introducing old and new dynamics that we are about to examine.
Advertising
There was not a lot of advertising in Web 1.0. Signing a contract with an advertiser was difficult, and many bloggers were independent and created their sites out of passion rather than to make a real profit. This was fine at the beginning, but it soon became clear that enthusiasm was not enough: a way was needed to capitalize on the volume of visitors and build a sustainable business. Thus the most important advertising networks were born. This was, for example, the area in which Google managed to establish itself above all others: through its excellent search engine it was increasingly able to provide relevant results, together with sponsored links tailored to each search. Today this is still the case, except that the giant's advertisements have moved beyond the search engine: they have penetrated the target sites themselves, and they follow your navigation from site to site. If they understand that you are interested in a product, and they are able to follow you along your path of exploration, you can be sure that for weeks you will see the same sponsored objects in the margins of the pages you visit.
It is therefore clear that the real business of these operators is no longer directly related to the service offered, but to the advertising they are able to show you, and thus to the purchase they manage to push you toward: the so-called "conversion". The ultimate customer is not the user who uses the service, but the advertiser who pays for the ad space. At this point the user becomes the product, to be exposed to as many advertising spaces as possible, as targeted as possible, in order to maximize the number of clicks received by advertisers.
There is therefore an important conflict of interest at play: the Web is no longer built to meet the needs of the end user, but is shaped to maximize conversion into the purchase of a product. It happens, in fact, that some topics, such as current affairs like "terrorism" or "Covid", are not popular with advertisers, who do not like being associated with contexts that trigger "negative" feelings in the minds of visitors. Consequently, such content will have no paying advertisers and will not be profitable for the containers, the social platforms. Even more, advertisers do not like to appear alongside content that discusses these topics at all, so they press the containers themselves to adapt to their will, penalizing anyone who posts unwelcome material.
This is the case, for example, of YouTube, where it has been shown that entire topics, such as declared membership in the LGBT world, or the wish to speak openly about controversial subjects even with a critical spirit, would lead to the immediate demonetization of the video or the entire channel. And demonetization is not an end in itself: to reduce costs and keep advertisers happy, it leads to the penalization of the content on the platform through "shadow banning". Shadow banning is a modern method for making content or information sources "disappear" from the network without issuing real "bans", which would be easy to contest. It is applied in a silent, gradual and mostly invisible way, until it becomes evident through the effects it produces. Users subjected to a shadow ban are not notified of anything; they continue to see their content as published, but in practice it no longer appears in searches, no longer appears among the suggested contents, and sometimes no longer even appears in the feeds of their own subscribers. Basically, you exist, but nobody finds you, so it is as if you did not exist. It is harder to prove and harder to contest, but it exists and it is applied.
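To make the mechanism concrete, here is a minimal sketch, in TypeScript, of how a shadow ban could be wired into a feed pipeline. It is purely illustrative: the field names and functions are invented, and no real platform documents working exactly this way; the point is only that the author keeps seeing the post while everyone else silently stops receiving it.

    // Hypothetical sketch of a shadow ban inside a feed pipeline.
    // All names are invented for illustration; no real platform API is implied.

    interface Post {
      id: string;
      authorId: string;
      shadowBanned: boolean; // hidden flag, never revealed to the author
    }

    // The author keeps seeing their own post as "published"...
    // ...while for every other viewer it is quietly filtered out.
    function visibleTo(post: Post, viewerId: string): boolean {
      if (viewerId === post.authorId) return true;
      return !post.shadowBanned;
    }

    function buildFeed(posts: Post[], viewerId: string): Post[] {
      return posts.filter((p) => visibleTo(p, viewerId));
    }

    const posts: Post[] = [
      { id: "a", authorId: "alice", shadowBanned: true },
      { id: "b", authorId: "bob", shadowBanned: false },
    ];
    console.log(buildFeed(posts, "alice").length); // 2: alice still sees her post
    console.log(buildFeed(posts, "carol").length); // 1: everyone else does not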
We know that the algorithm determining the ranking of the material shown is imperfect and constantly evolving, and what is demonetized and therefore penalized today may not be tomorrow, and vice versa. But the fact remains that the container can arbitrarily decide what is allowed and what is not on its platform. And that's fine, it can do so; but we must be aware that a service funded by advertising is not built to meet our needs but, necessarily, those of the market, and social platforms obviously do not look favorably on the idea of telling us this. Furthermore, the rules these algorithms use to reward or penalize content are almost never made transparent; most of the time we can, at best, try to derive them by reverse-engineering the results we obtain. The question we generally have to ask ourselves to understand which rules govern a service is: who pays?
The influence of the container
Having large, centralized containers is also bringing us back to the situation we had when television and printed media dominated the scene. We are subjected to the same power of influence over information, only now the content is much better tailored to us. We have the feeling that everything is free and accessible, but we are barely aware of shadow banning and of content designed to influence our opinion. Not to mention that these are tools that encourage us to create, and willingly accept, the imposition of a psychological framing, with content that is always "to our liking" based on our "previous views"; but that would open another large digression. The initial problem is that these are new tools, much more subtle in their functioning than the ones previous generations taught us to recognize, and this is why we have not yet developed the individual antibodies needed to detect such behaviors in the services we use.
We think the content is free only because it comes from a multitude of creators we trust, but we ignore the fact that if the container itself is not free, we will not be able to develop tastes and preferences that deviate from what is offered to us, which we will be led to think is the only valid space of opinion.
Take for example the case of Cambridge Analytica, which caused much discussion. There, what had long been evident to insiders became evident to the masses: user data is collected and correlated to file personal interests and inclinations, with precision down to the single individual. The population is then categorized into segments, determining where each person belongs, and specific conversion strategies are developed for each category. In this way it becomes easy to submit one campaign rather than another to each person, choosing the best one each time on the basis of simple metrics. The campaigns can therefore leverage the sensitive points of the individual identified by the preliminary analysis, and the results are collected and analyzed to improve the next campaign. In that example the scandal was evident, but these profiling operations were carried out before, continued during the scandal, and are still carried out today. Only, for the most part, everything happens quietly.
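To give an idea of the kind of pipeline described above, here is a toy TypeScript sketch of profiling-based campaign selection. The segments, interests and conversion rates are entirely invented for the example and are not taken from any real dataset or from Cambridge Analytica's actual tooling.

    // Toy illustration of profiling-based targeting: profile, segment,
    // then serve each segment the best-converting message measured so far.
    // All data below is invented for the example.

    interface UserProfile {
      id: string;
      interests: string[]; // collected from browsing, likes, shares...
    }

    // Step 1: assign each user to a segment based on recorded interests.
    function segmentOf(user: UserProfile): string {
      if (user.interests.includes("security")) return "fearful";
      if (user.interests.includes("economy")) return "pragmatic";
      return "generic";
    }

    // Step 2: per-segment campaigns with their measured conversion rates.
    const campaigns: Record<string, { message: string; conversionRate: number }[]> = {
      fearful: [
        { message: "Protect what matters", conversionRate: 0.031 },
        { message: "Don't let them decide for you", conversionRate: 0.044 },
      ],
      pragmatic: [{ message: "Lower costs, more jobs", conversionRate: 0.027 }],
      generic: [{ message: "A better future", conversionRate: 0.012 }],
    };

    // Step 3: show each user the best-performing message for their segment;
    // the results feed back into conversionRate and improve the next round.
    function pickMessage(user: UserProfile): string {
      const options = campaigns[segmentOf(user)];
      return options.reduce((a, b) => (b.conversionRate > a.conversionRate ? b : a)).message;
    }

    console.log(pickMessage({ id: "u1", interests: ["security", "sport"] }));
    // -> "Don't let them decide for you"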
To have free information, it is therefore wise to first make sure the container is actually free. Nothing can ever be reliable if we know, or even suspect, that the intermediary may have ulterior motives and the ability to alter the results. In the case of advertising these motives are obvious, but when the influences are external, as with political influences, they can be far more insidious precisely because they are less obvious.
Fake news and Ipse dixit
Since the 2016 American elections we have adopted a new term: "fake news". Fake news is not a new concept, and "lie" would be a perfectly suitable word to describe the content in question, but a new name was needed to give concrete form to something that had to be tackled with new control tools acceptable to the masses.
Indeed, "fake news" is perceived as much more concrete than a term describing an immaterial idea such as "lie". It describes an identifiable enemy, to which it is easy to give a single identity. We can almost see a personality behind the term, one we can easily point to as the evil to be defeated, the thing we must distance ourselves from: everything that is not scientific, commonly accepted, or determined by a "trusted" entity. We have grown accustomed to dividing information into two broad categories, "true or reliable" and "fake news". However, those who are able, and who unfortunately take on the task, of deciding what goes into one category or the other are not only us, as it should be; as we discussed a moment ago, that decision can be largely influenced by the container and by the forces that put pressure on it.
In fact, the big news producers noticed this opportunity very quickly, and jointly realized that a strong negative identity had to be given to everyone who brought ideas different from those on which they had placed their economic and political interests. Hence, big movements against "fake news" were born, embracing in some cases even state bodies themselves, which are now starting to produce propaganda that is apparently "extra-political" but proposes concepts we can, without much trouble, define as "truth of state". Just to follow up on Pasolini's thought quoted at the beginning of the article.
Let's take an Italian case as an example. In 2017 Laura Boldrini sent an open letter to Facebook asking that measures be taken on the social network against the authors of insults and fake news. Now, as long as we are talking about maintaining civility on the net, that is one thing, though for that there is already the law on defamation which, if correctly applied, would be enough. But when it comes to fake news the issue becomes more delicate and, oddly, in the press it almost takes second place to the desire to eliminate insults. Almost as if to hide a request for censorship behind the far more agreeable veil of an appeal to personal respect.
But it is in 2020 that the government raises the bar, when AGCOM, the Italian Communications Authority, requests the removal of certain content from the network and from television media, accused of spreading fake news dangerous to the individual, such as the use of vitamins C and D to counter infection by the Covid-19 virus. I quote (translated):
“The Communications Authority (...) has decided to initiate a sanctioning procedure (...) against the companies that publish the channels 61 DTT and 880 of the satellite platform which broadcast Adriano Panzironi's program, "Il Cerca Salute - LIFE 120 ".
Following reports and official monitoring, Agcom ascertained that Mr. Panzironi, whose transmissions have already been sanctioned by the Authority, reiterates the conduct through the dissemination of misleading and scientifically unfounded information on various types of diseases and possible treatments or ways of preventing them. Mr. Panzironi also dedicated part of the programming to "what they didn't tell you about the coronavirus", going so far as to suggest the use of vitamin C and D - products marketed by LIFE120 and advertised during the broadcasts - to prevent infection. This conduct is objectively serious due to the current health emergency and the dramatic moment for the country. The sanctioning procedure initiated provides that the Authority, deeming the conduct serious and repeated, may order against the issuer the suspension of the activity for a period of up to six months and, in the most serious cases, the revocation of the concession or of the authorization. (...)
To counter the dissemination of false or in any case incorrect information, the Authority has invited the suppliers of video sharing platforms to take every measure aimed at combating the dissemination on the network, and in particular on social media, of information relating to the coronavirus not correct or otherwise disseminated by non-scientifically accredited sources. These measures must also include effective systems for identifying and reporting offenses and their perpetrators.
Rome, March 19, 2020”
With this the state embraces an official opinion and imposes it on communication, going against Article 21 of the Italian Constitution on freedom of thought, speech and press, and arrogating to itself the right to invoke the sacredness of a science that is not such, since it rests on the ipse dixit of a few scientists welcome to the current power, and is in fact promptly contradicted by science, the real one, which is able to discuss and ultimately prove the exact opposite, as does, for example, a study deposited among the scientific publications under the title "Evidence that Vitamin D Supplementation Could Reduce Risk of Influenza and COVID-19 Infections and Deaths".
Now, beyond the result of the research itself, which would take us too far into the merits, the clear point to highlight is that science is in continuous discussion; it lays its very foundations in free and open debate. It is normal for it to deal with different and conflicting ideas, and anyone who wields the argument "science" as something certain and mandatory is anything but a scientist: he is doing cheap mass communication using the ipse dixit (literally "he himself said it"), and that is the most dangerous thing there is.
That was not enough. Since the fake news topic resonates strongly within state bodies, Andrea Martella, Undersecretary of State with responsibility for Publishing, later decided to officially establish a government "task force" against fake news. He himself defined it as a "monitoring unit against the spread of fake news", committed, therefore, to identifying and countering news and sources that differ from the official "state" truth.
Quoting Martella from la Repubblica's coverage of the announcement:
"The role of the Postal Police must be strengthened to allow it to promptly identify the so-called 'toxic sources' and to interrupt the chain of their diffusion on social networks"
"All public administrations will soon have to equip themselves with adequate skills and professional figures specialized in the fight against the fake news phenomenon at all levels"
"It is necessary that the Parliament, through an ad hoc law, assign effective tools to Agcom, the independent communications authority, to adequately sanction those who spread fake news"
These maneuvers, in evident conflict with the Constitution, are so heavy-handed and organized in their imposition of censorship that they cannot fail to bring to mind the twenty years of Fascism.
The "fake news" method is therefore a perfect and subtle weapon to control what may be questioned and what may not, and it acts on the perception each of us has of the problem. If the state said "you must not publish this because we do not like it", it would be easy to accuse it of censorship; but by citing a hypothetical science, appointed by the state itself as the bearer of "scientific truths", it can put pressure on social media with the approval of citizens unaccustomed to the scientific method, who will think "it must be correct, science says so". But this is not science; this is scientism, which is another thing. Science means questioning everything; it means being skeptical of everything and demanding proof of every single claim. And this can only be achieved through freedom of discussion on every topic, without exclusions. Scientism, on the other hand, is to all intents and purposes a religion of faith, based on the idea that "someone will surely have thought about it for us", and it leads us to accept essentially any communication coming from official "scientists".
Freedom of opinion is now more endangered than it has been at any time since the postwar period, and a government that allows this can neither call itself democratic, as it limits the formation of free thought, nor should it dare to speak of the scientific method. And the large containers of Web 2.0 once again play a decisive role in all of this.
Towards the Web3.0
A change is not only desirable but necessary if we are not to see new information totalitarianisms emerge. Fortunately, the tools and the awareness needed for change are evolving. Blockchain, from 2008 to today, has given the table a first shake, and much more is about to rise on the foundations of this technology. The culture of privacy is still scarce, but it too is slowly expanding. Tools for decentralizing data distribution are evolving, and within a few years it will probably seem normal to use them. These new tools and implementation paradigms will define the founding principles of the coming evolution: Web3.0.
Let's start by analyzing possible solutions to the problem of advertising influence. We start from the premise that we must forget the idea that what we find on the Web is free. It is never free. If you do not pay for a service directly with money, you are clearly paying for it in some other way, or someone else is paying for it on your behalf. In the particular case of non-profit organizations that produce open source code for the community, the funders may be benevolent users who decide to donate money to a worthy cause, although even this is not always the case. In the case of for-profit companies, however, it is practically impossible. Once you have identified how you are paying for a service, the next step is to ask yourself whether its real total cost is actually worth the service you are using.
The free Web is, in fact, an idea destined to remain merely a product of our times, and we hope it will be abandoned as soon as possible. We believe that growing personal awareness around privacy can be a good lever in this direction, and we are confident that an excellent culture can emerge from the world of blockchain, where one gets used to paying transaction fees for the maintenance of the network. Let us distrust the impartiality of any result that carries advertising inside it, and let us realize that a free society cannot allow the economic system, represented by advertisers, to determine the evolution of the conscience and education of individuals.
End users must go back to being the only real customers of the platforms, and to do this they need to understand, step by step, that in order to have a free service it is necessary to pay for it. Not necessarily much: just enough to cover the value they would otherwise have paid individually in hidden costs, losing bits of personal freedom along the way.
To solve the problem of advertising influence, services must be paid for out of users' own pockets, and users in turn must decide to prefer services that do not subject them to unsolicited advertising for the purpose of monetization.
The second problem, the container's influence on the content presented, can be solved in only one way: with the utmost transparency about the method used to select content for the end user. The question could be broadened considerably here, and there are important implications for the design of companies' business plans, but a future-oriented approach should aim to open the details of its functioning to the outside as much as possible, at least in the most sensitive components. Much can also be done technologically, by encouraging the evolution and use of open, downloadable database systems, accessible directly or through open APIs that allow them to be queried with a sufficient degree of implementation detail exposed. We must therefore demand as much openness from companies as possible, and prefer services that guarantee a higher degree of transparency. Companies, for their part, should invest in these technologies, prefer business plans that embrace the open source mentality as much as possible, thereby guaranteeing that results cannot be altered for undisclosed purposes, and facilitate independent analysis of those same results, as thorough as possible.
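As a concrete illustration of what such transparency could look like, here is a minimal TypeScript sketch of a published, deterministic ranking rule over openly downloadable data. The fields and the time-decay formula are assumptions made up for the example, not any platform's real algorithm; what matters is that anyone who can fetch the same items can re-run the rule and verify the ordering they are shown.

    // Sketch of a transparent, reproducible content-selection rule.
    // Field names and weights are illustrative assumptions.

    interface ContentItem {
      id: string;
      ageInHours: number;
      positiveVotes: number;
      negativeVotes: number;
    }

    // Published scoring rule: no hidden inputs, no per-user secrets.
    function score(item: ContentItem): number {
      const votes = item.positiveVotes - item.negativeVotes;
      return votes / Math.pow(item.ageInHours + 2, 1.5); // simple time decay
    }

    // Anyone downloading the same items through an open API can recompute
    // this ordering and compare it with what the platform actually shows.
    function rank(items: ContentItem[]): ContentItem[] {
      return [...items].sort((a, b) => score(b) - score(a));
    }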
Finally, the last problem: the imposition of the ipse dixit method through the lever of fake news. The only way forward is to ensure that everyone can always access a plurality of free opinions, and possibly to have tools that help individuals carry out personal research, increase their understanding of the problem domain, and discuss those insights and ideas with the rest of the community. To ensure that the right to this plurality of opinion is respected, it is strongly recommended, if not necessary, to impose on the company a technical non-censorability of data. This means that the tools must be able to evolve in such a way that, even if it wanted to, no one would actually be able to permanently delete data. Not even the company that manages the service. The data must therefore always be downloadable by users, who, if they see fit, must be free to re-upload it as needed.
Obviously, the legal responsibility for uploading the data must be lifted from the company, which at that point loses any responsibility for it, and placed instead on the shoulders of the user, who becomes the only person truly responsible for the action performed. Data must therefore never be hidden by the service, unless there is a clear violation of a public regulation made known in advance, and in the event of such a violation the user may be held accountable in the appropriate venues. Only in this way can we relieve the company of the responsibility, and above all of the power, to decide for us what may be questioned and what may not. Everything must pass through a community screening, facilitated by powerful tools and respectful of the stated regulations, so that everyone can develop their own personal awareness within limits known in advance. The paternalism we are unfortunately witnessing in this period, useful for justifying the witch hunt against fake news, will only produce populations unable to think for themselves. The exact opposite action must be taken.
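Here is a minimal sketch of the content-addressing idea behind this kind of technical non-censorability, assuming a generic store where the address of a piece of data is simply the hash of its bytes. This is an illustration of the principle, not Etherna's or any specific network's actual API: removing one copy does not remove the address, and any user who kept the data can restore it unchanged.

    // Toy content-addressed store: address = SHA-256 of the content.
    // Illustrative only; real decentralized storage networks are more complex.
    import { createHash } from "crypto";

    const store = new Map<string, Buffer>();

    function addressOf(content: Buffer): string {
      return createHash("sha256").update(content).digest("hex");
    }

    function upload(content: Buffer): string {
      const address = addressOf(content);
      store.set(address, content);
      return address;
    }

    function remove(address: string): void {
      store.delete(address); // a "takedown" only removes this copy
    }

    // A user who kept a local copy restores availability at the same address.
    const original = Buffer.from("an inconvenient but lawful opinion");
    const addr = upload(original);
    remove(addr);
    console.log(upload(original) === addr); // true: same bytes, same address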
What emerges is that, beyond the specific technical solutions that can be applied to each of the problems raised, the common thread uniting them is the need to improve the degree of transparency offered, be it economic, algorithmic, or about data. This is the fundamental principle, which ultimately does nothing but shift the responsibility for assessing the quality of data from companies and governments to the end user, and therefore to the community as a whole. A decentralized verification process is necessary for everything else to hold, which is why the decentralization of services is so important.
Obviously, Etherna will try to adopt these principles as much as possible in the development of its solutions, and there will be much more to say about them in the future.