Web3: A revolution or a marketing buzzword?
The original iteration of the World Wide Web, Web 1.0, which is generally recognised to have stretched from 1990 to 2004, seems quaint now. It was a static version of the web, where users consumed read-only content. The barriers to entry were high: one needed a desktop computer and expensive online access to surf the web. One of its defining features was hyperlinking, allowing users to jump from one page to another.
Importantly, Web 1.0 was decentralised; essentially, it “belonged” to everyone. But Web 2.0, also called the social web or interactive web, which prevailed from 2004 to 2014 (and beyond), would change everything.
For starters, the barriers to entry for users were significantly lowered. One could use a smartphone or other device to access the web, and both read it and write to it – in other words, consume and create content. Social media really came into its own in this period, connecting people all over the world.
But big tech companies – the likes of Meta (then still known as Facebook), Amazon, ByteDance, Alphabet (before 2015, Google), Advance, Microsoft and others – also began to dominate and consolidate the web, centralising it. They now control an exceptional amount of content on the internet. A good example is Meta, which owns Facebook, Instagram and WhatsApp, three of the most powerful social networks on the planet.
Most significantly, users have become the “product”, providing the data and content that these networks own. Remarkably, the social media networks are now beginning to run out of people on the planet who have not yet joined their platforms.
It’s also notable that Web 2.0 did not entirely replace Web 1.0. Rather, they coexist and complement each other. A streaming service such as Netflix is a clear example of a thriving 1.0-type offering, simply providing users with content for a fee.
But Web 2.0 also brought with it major concerns, including surveillance (so sensationally exposed in 2013 by former United States National Security Agency contractor Edward Snowden), the monetisation of user attention (in other words, advertising), polarised online discourse, censorship and disputes over who owns users’ data.
Web 2.0 lives on, but Web3 – a term coined by British computer scientist Gavin Wood (web inventor Tim Berners-Lee’s earlier prediction of a “semantic web” is sometimes also called Web 3.0) – has been with us since 2014.
Essentially, Web3 has several important features that contrast with Web 2.0: for starters, it puts ownership of data back into the hands of users, not Big Tech; and through the use of blockchain technologies it strives to once again decentralise the internet, by eliminating centralised authorities that can control who accesses data.
Web3 is also trustless and permissionless: data exchanges and transactions are transparent, and no single party can alter or block them. It features connectivity and ubiquity – the internet of things, where the web is everywhere and everything is connected, without hardware or software limitations.
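To make the “cannot be changed” claim a little more concrete, here is a minimal Python sketch of the hash-linking idea that underpins blockchains. It is purely illustrative (a real chain also relies on consensus among many independent nodes): each block commits to the hash of the one before it, so quietly altering an old record breaks every later link.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents (as deterministic JSON) with SHA-256."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    """The chain is valid only if every stored prev_hash still matches."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(is_valid(chain))                   # True
chain[0]["data"] = "Alice pays Bob 500"  # tamper with history
print(is_valid(chain))                   # False: the link to block 0 no longer matches
```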
Finally, Web3 is a semantic web, a concept Berners-Lee described as “a web of data that can be processed directly and indirectly by machines”. Essentially, through advancements in machine learning (ML) and natural language processing (NLP), computers will be able to understand and interpret context, emotion and the nuances of language, and thus connect data in the way that humans can.
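To illustrate what “data that can be processed directly by machines” looks like in practice, the semantic web expresses facts as subject-predicate-object triples that software can query without having to parse free text. The toy Python sketch below uses plain tuples purely for illustration; real semantic-web systems use standards such as RDF and SPARQL.

```python
# Facts expressed as machine-readable (subject, predicate, object) triples.
triples = [
    ("TimBernersLee", "invented", "WorldWideWeb"),
    ("WorldWideWeb", "launchedIn", "1990"),
    ("GavinWood", "coined", "Web3"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the given pattern (None acts as a wildcard)."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "What did Tim Berners-Lee invent?" -- answered by pattern matching, not text parsing.
print(query(subject="TimBernersLee", predicate="invented"))
# [('TimBernersLee', 'invented', 'WorldWideWeb')]
```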
An example of the decentralisation Web3 promises is Mastodon, open-source software that allows users to create decentralised social networking services. Rather than running on a blockchain, it relies on a federation of independently run servers: there is no central governing authority, and each server’s community decides on important aspects such as codes of conduct, privacy and content moderation.
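As a loose illustration of that federated model (a toy sketch of my own, not Mastodon’s actual ActivityPub implementation), each community runs its own server with its own rules, and servers relay posts to the peers they choose to federate with, with no central database involved.

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    """A toy federated server: it holds its own users, rules and timeline."""
    name: str
    code_of_conduct: str
    timeline: list = field(default_factory=list)
    peers: list = field(default_factory=list)

    def publish(self, author: str, text: str) -> None:
        post = f"{author}@{self.name}: {text}"
        self.timeline.append(post)
        # Federation: relay the post to peer servers -- no central authority involved.
        for peer in self.peers:
            peer.timeline.append(post)

art = Server("art.example", code_of_conduct="No spam; be kind")
tech = Server("tech.example", code_of_conduct="Stay on topic")
art.peers.append(tech)  # the two communities choose to federate

art.publish("alice", "New painting finished!")
print(tech.timeline)    # ['alice@art.example: New painting finished!']
```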
Web3’s supporters claim that it holds great advantages: it addresses the over-centralisation of the internet by Big Tech, and offers improved data security, scalability and privacy compared with Web 2.0.
But it has many detractors, too: new Twitter owner Elon Musk has said it “seems more marketing buzzword than reality right now”, for example. Twitter founder and former CEO Jack Dorsey describes it as a “venture capitalists’ plaything”, arguing that it will simply centralise the web in the hands of venture capital funds instead of Big Tech.
Web3 does have its drawbacks. The sheer energy consumption of blockchain computations is a big concern. Decentralised services are harder to regulate than centralised ones. (Most are also not as decentralised as they claim, because they often rely on centralised services to access blockchains.) And many implementations are still niche ones, primarily around cryptocurrency.
So, is Web3 a revolution, and how does it affect us?
I believe that we will continue to see different integrations of Web3 concepts into the internet, such as self-sovereign identity, where users control the information they use to prove their identity across platforms; blockchain and non-fungible tokens for licensing and ownership declarations; cryptocurrencies for transactions; and ML and NLP for linking data and powering chatbots.
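To give one concrete flavour of these ideas, self-sovereign identity typically rests on ordinary public-key cryptography: the user holds a private key, signs a claim about themselves, and anyone can verify that claim without asking a central identity provider. The sketch below is a minimal illustration using the general-purpose Python `cryptography` package, not any particular Web3 identity standard.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The user generates and keeps the private key; only the public key is shared.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The user signs a claim about themselves -- no central identity provider involved.
claim = b"I control the handle @alice and this public key"
signature = private_key.sign(claim)

# Any platform can check the claim against the public key the user presented.
try:
    public_key.verify(signature, claim)
    print("Claim verified: the holder of the private key made this statement.")
except InvalidSignature:
    print("Verification failed: the claim was not signed by this key.")
```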
But a revolution? I don’t think so. Just as Web 2.0 supplemented rather than supplanted Web 1.0, Web3 won’t completely change the internet as we know it. It will be more evolution than revolution, then – and we will evolve with it.