16 Nov 2021 · 6 min read

Cyber Protocol Is Redefining Search Engines With Web 3.0

Disclaimer: The text below is an advertorial article that was not written by Cryptonews.com journalists.

Cyber is building an alternative to Web 2.0 search engines like Google and Yandex: a search engine built on Web 3.0. Existing search engines, of which Google remains the most popular, have fundamental problems in their architecture. A search engine built on Web 3.0 is a novel, even revolutionary concept. To understand the importance of what Cyber Protocol is doing, however, it is important first to understand how Web 2.0 search engines work and how Cyber is working to address their issues.

How Google Works

Google is the leading search engine on the internet. 80% of search queries are done through the search engine, making it the dominant service in the space. However, a number of issues arise when using it. For one, how indexing is carried out remains a mystery to the engine's users. There have been countless theories as to how search results are chosen when a user enters a query.

This is a problem because users cannot be sure what results they will get when they enter a particular query. For example, Google's algorithm will display two different sets of search results to two different users making the same query. It does this using data collected on each user over time, presenting results tailored to their browsing history and adjusting those results accordingly.

Issues With Web 2.0 Search Engines

A fundamental issue with Google's services involves link indexing. The mechanism for indexing links is important because it is how content is ranked on the search engine: it determines the order in which search results show up when a user enters a query.

It has been postulated that Google indexes according to the amount of content relevant to the query, but there is no definitive proof of this. Instead, a site with less information related to the query will sometimes show up before a site with more relevant content. If this is happening, then Google does not rank content purely by its relevance to the query. How does it do it, then?

Google takes into account a user's location, previous queries, and local legislation, among other factors. With these, it tailors the search results to each user, omitting results for one user that would be shown to another. This convoluted process is inefficient for the user entering the query, as they may not be shown the information they need.

Web 2.0 search engine architecture also relies on protocols like TCP/IP, DNS, URL, and HTTP/S. All of these protocols use addressed locations, better known as URL links. These links are shown to the user when they enter a query, and clicking one takes the user to a third-party website where the content they are looking for is located. This mechanism can lead to problems.

One of these problems is the ease of falsifying content. Any content on the internet can be changed, removed, or blocked at any time. Hyperlinks also make it easy for malicious actors to replace legitimate content with dangerous or harmful material, and content remains vulnerable to being blocked by local authorities to further a political goal.

How Will Cyber Solve These Problems?

Cyber has built an experimental prototype of a Web 3.0 search engine that it is currently testing. Cyb.ai is essentially a browser within a browser that allows users to surf the web. Users can search for and view content through its built-in IPFS node, index content, and interact with decentralized applications.

A Web 3.0 search engine differs greatly from Web 2.0 in that it does away with the opaque indexing mechanisms employed by Google and others. Crawler bots are not needed to detect changes to a site's content, because every change propagates to search results automatically.

Individual users wield power over search results through a peer-to-peer network of participants. This works similarly to torrenting: reliable storage is provided, content cannot be censored, and access to content can be arranged even without a reliable connection. In a Web 3.0 search engine, the risk of censorship or loss of privacy is eliminated.

Web 3.0 search engines feature a public database with open access for everyone, whereas centralized engines like Google and Yandex rely on proprietary databases with limited public access.

How Does Cyb.ai Rank Content?

Content ranking in a Web 3.0 search engine differs greatly from existing search engines. The Content Oracle serves as the basis of Cyber: a collaborative, dynamic, and distributed knowledge graph produced by the work of all participants in the decentralized network.

Web 3.0 search engines rank relevant content using cyberlinks rather than hyperlinks. To upload content to the knowledge graph, a transaction containing a cyberlink is first conducted. Similar to torrenting, a user becomes a distribution point after they find and upload content. This works similarly to payload fields on the Ethereum network, although the data in cyberlinks is structured.
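As a rough sketch of the idea (the class name, fields, and hashing scheme below are illustrative, not Cyber's actual implementation), a cyberlink can be thought of as a pair of content hashes recorded in a transaction, for instance linking a search term to the content it should surface:

```python
import hashlib
from dataclasses import dataclass

def cid(content: bytes) -> str:
    # Hypothetical stand-in for a content identifier:
    # the hash of the content itself, not its location.
    return hashlib.sha256(content).hexdigest()

@dataclass(frozen=True)
class Cyberlink:
    # A cyberlink connects two pieces of content by their hashes,
    # e.g. a query term linked to the content it should surface.
    src: str  # hash of the "from" content (e.g. a keyword)
    dst: str  # hash of the "to" content

link = Cyberlink(src=cid(b"web3 search"), dst=cid(b"an article about Cyber"))
```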

Cyberlink rankings are implemented via tokenomics. Users find content via hashes stored by other users. Because content is addressed by its hash, changing the content changes the hash. This makes it possible to find content without knowing the location of any server, and permanent links can be exchanged without breaking them.
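The hash-addressing idea can be shown in a few lines of Python (using SHA-256 as a stand-in for whatever hash function the network actually uses):

```python
import hashlib

def content_hash(data: bytes) -> str:
    # Content addressing: the identifier is derived from the bytes themselves.
    return hashlib.sha256(data).hexdigest()

original = content_hash(b"hello, web3")
edited = content_hash(b"hello, web3!")  # any edit yields a different hash
assert original != edited
# The original hash still refers only to the original bytes, wherever they
# happen to be stored, so a link to it can never silently point at altered
# content and never "breaks" when the content moves.
```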

Cyber has also developed a ranking algorithm called cyberRank. It works similarly to PageRank but differs in the protection it provides: cyberRank protects the knowledge graph from spam, cyberattacks, and selfish user behavior via an economic mechanism.
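Since cyberRank works similarly to PageRank, a minimal PageRank-style iteration gives the flavor of link-based ranking — content linked from highly ranked content ranks higher. Note this is generic PageRank, not cyberRank itself, which adds its economic protections on top:

```python
def rank(links, iterations=20, damping=0.85):
    # Minimal PageRank-style iteration over a graph of (src, dst) links.
    nodes = {n for edge in links for n in edge}
    out = {n: [d for s, d in links if s == n] for n in nodes}
    r = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        nxt = {n: (1 - damping) / len(nodes) for n in nodes}
        for s in nodes:
            targets = out[s] or list(nodes)  # dangling nodes spread evenly
            share = damping * r[s] / len(targets)
            for d in targets:
                nxt[d] += share
        r = nxt
    return r

scores = rank([("a", "b"), ("c", "b"), ("b", "d")])
# "b" is linked to twice, so it outranks "a" and "c"
```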

Tokenomics For Content Ranking

The entire Cyber network will be governed by its tokenomics. Users will need to hold tokens to be able to rank content in the knowledge graph. Tokens provide indexing and ranking capabilities, granting access to the resources of the knowledge graph.

Tokens will allow users to index content with V (volts) and rank it with A (amperes). To obtain these, the H (hydrogen) token will need to be held by the user for a certain period of time. H (hydrogen) is obtained by liquid staking BOOT (Bostrom) and CYB (Cyber) tokens, similar to staking income on Polkadot, Solana, or Cosmos.
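As a purely hypothetical illustration of this mechanism (the function name, rates, and linear formula below are invented for the example — the real parameters are set by the protocol), holding H over time could mint the two resources like this:

```python
def resources_minted(h_held: float, days: int, v_rate=0.001, a_rate=0.001):
    # Hypothetical sketch: holding H (hydrogen) over time yields
    # V (volts, indexing capacity) and A (amperes, ranking capacity).
    volts = h_held * v_rate * days
    amperes = h_held * a_rate * days
    return volts, amperes

v, a = resources_minted(h_held=1000.0, days=30)
```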

70% of the tokens will be distributed to Ethereum users in the Genesis event. Users' on-chain activity will be analyzed to determine whether they are eligible for the airdrop. This puts the majority of tokens in the hands of users who have already proven to create value on blockchains.

What To Expect In A Decentralized Search Engine

Search results in a Web 3.0 search engine will not look exactly like those of existing engines. For one, search results will include the desired content itself, ready to be read or viewed without clicking links that lead to a third-party site.

Secondly, payment buttons for online stores can be embedded directly in search snippets, as can buttons to interact with applications on any blockchain included in search results.

With the launch of the Bostrom canary network, users are now able to participate in the process of bootstrapping the Superintelligence with the help of Cyb.