Is decentralising Artificial Intelligence the way forward?


As the internet evolves towards its third generation, the conversation has centred on the importance of an open internet: one that is decentralised and built on an entirely new technological foundation. An intersection with artificial intelligence (AI) was inevitable, particularly since generative AI has evolved so rapidly.

As AI developers embrace simpler, open-source models, what shape will the decentralised AI dream take? Varun Mathur, co-founder and CEO of HyperspaceAI, detailed this in a post on X earlier this week. He called it a “BitTorrent-like network for AI”.

He isn’t the first to talk about this.

The Opentensor Foundation, a not-for-profit organisation led by former Google AI engineer Jacob Steeves, for instance, is also developing open and accessible AI tech, primarily focusing on a network of user-provided computing power. Think of these nodes the way your computer becomes part of a torrent network (we do not condone illegal torrent downloads; legal torrents work the same way): it not only downloads a file but also uploads parts of it for other users to download. For larger files, a greater number of users uploading improves the download speed and time for anyone pulling the file down.

This, in essence, is what decentralised AI is: as Mathur’s outline suggests, Hyperspace’s idea is to provide “community AI pools” made up of computing nodes. Setting aside the complex terminology that is inevitable in decentralised network conversations, think of each node that joins (any compatible device capable of contributing to an AI network’s performance as it responds to queries) as additional computing power, much like the torrent network described above.

“We’re looking into defining a protocol that relies on global computational collaboration, which means that we need to give valid incentive to all the participants of the network to share and exchange their computation power,” the HyperspaceAI outline states. Much like in BitTorrent, users on a decentralised AI network can bring their own computing devices, which are likely to be powerful enough to run high-quality models. The network is, therefore, distributed.
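
To make the idea a little more concrete, here is a minimal sketch in Python of how a community pool might track nodes and credit them for the compute they contribute. The class names and the proportional-credit rule are illustrative assumptions for this article, not taken from HyperspaceAI’s or Bittensor’s actual protocols.

```python
# A minimal, illustrative sketch of a "community AI pool" in which nodes
# contribute compute and earn credits for the work they do. All names here
# (ComputeNode, CommunityPool, the credit formula) are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ComputeNode:
    node_id: str
    flops_offered: float          # compute the node volunteers, in GFLOPS
    credits_earned: float = 0.0   # incentive balance for work performed


@dataclass
class CommunityPool:
    nodes: dict = field(default_factory=dict)

    def join(self, node: ComputeNode) -> None:
        # A new device joins the pool and adds its compute to the total.
        self.nodes[node.node_id] = node

    def total_capacity(self) -> float:
        return sum(n.flops_offered for n in self.nodes.values())

    def dispatch(self, work_units: float) -> None:
        # Split a query's workload across nodes in proportion to what each
        # offers, and credit them accordingly (the incentive step).
        capacity = self.total_capacity()
        for node in self.nodes.values():
            share = node.flops_offered / capacity
            node.credits_earned += work_units * share


pool = CommunityPool()
pool.join(ComputeNode("laptop-a", flops_offered=50))
pool.join(ComputeNode("desktop-b", flops_offered=150))
pool.dispatch(work_units=100)   # e.g. one batch of inference requests
for n in pool.nodes.values():
    print(n.node_id, round(n.credits_earned, 1))
```

In a real network, the credit step is where incentives such as tokens would be paid out, and the dispatch step would route actual model inference or training work rather than abstract work units.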

“By leveraging digital incentives and distributed computing, the Bittensor network is able to connect and coordinate a global supply of computing power and human ingenuity,” is how the Opentensor Foundation envisions the future of AI. The Bittensor network is designed to let individuals collaborate in building the network, contributing knowledge for training data sets and compute power for running AI calculations.

With a federated or centralised setup, the owning company has complete control over the development of the AI model and datasets used. In contrast, a decentralised AI system doesn’t have a single, central entity controlling development and training.

It looks good on paper, but we’ll know in due course how this evolves.

The AI we use, every day

There are two important ways to understand such a decentralised network: one, the AI tool as it is developed, and two, its subsequent evolution using training data sets. For illustration, consider OpenAI’s GPT generative AI. The step from GPT-3.5 to GPT-4 was a significant leap in terms of knowledge base, context understanding and conversational skills. Similar examples are Google’s Bard, Amazon’s Alexa and Adobe’s Firefly generative AI tools.

Thus far, we have seen the advantages of a closed ecosystem for developing AI tools; their results remain unmatched, as yet, by those of decentralised networks. This trajectory will continue until a powerful decentralised system comes along that allows extensive collaboration on improving the quality of models, the data sets used for training, and shared development feedback such as failsafe measures and bias elimination. That’s what the Opentensor Foundation and Mathur’s Hyperspace AI are attempting to change, in due course.

A few things to watch out for, of course, would include incentives for companies to use open-source models instead of proprietary ones, which they might develop faster. There will need to be some justification for giving up a feature or capability advantage, weighed against the additional cost of building a proprietary data set instead. Secondly, the quality of open-source data sets will be crucial. For example, if an AI model is being trained on a data set about a particular illness and its treatment, there will need to be comparable information about female, male and intersex patients; if that composition is skewed, the AI’s results will be biased.
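
As a rough illustration of that second point, a contributor might run a composition check like the Python sketch below before training on a shared medical data set. The record layout and the 20% threshold are assumptions made for the example, not any standard.

```python
# An illustrative check of data set composition before training, assuming a
# hypothetical list of patient records with a "sex" field. If the counts are
# badly skewed, a model trained on this data is likely to give biased results.
from collections import Counter

records = [
    {"patient_id": 1, "sex": "female", "outcome": "recovered"},
    {"patient_id": 2, "sex": "male", "outcome": "recovered"},
    {"patient_id": 3, "sex": "female", "outcome": "relapsed"},
    {"patient_id": 4, "sex": "intersex", "outcome": "recovered"},
    # in practice, thousands of rows pulled from the shared data set
]

counts = Counter(r["sex"] for r in records)
total = sum(counts.values())
for group, count in counts.items():
    print(f"{group}: {count} records ({count / total:.0%})")

# Flag the data set if any group falls below a chosen threshold, say 20%.
THRESHOLD = 0.20
skewed = [g for g, c in counts.items() if c / total < THRESHOLD]
if skewed:
    print("Warning: under-represented groups:", skewed)
```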

Specific data set partnerships that allow anyone to build or train an AI tool will save on cost, time and duplication of effort. OpenAI, Google, Meta and others largely operate as walled gardens when building AI tools to compete with everyone else, but that isn’t the complete picture.

OpenAI, for instance, is working with as-yet-unnamed partners to create an open-source dataset for training language models. “This dataset would be public for anyone to use in AI model training. We would also explore using it to safely train additional open-source models ourselves. We believe open source plays an important role in the ecosystem,” the company said in a statement earlier this month.

As far back as 2021, Meta (then Facebook) released the Fully Sharded Data Parallel (FSDP) open-source tool. This training technique made it more efficient to train large AI models whilst using less hardware.
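
For a sense of what that looks like in practice, here is a minimal sketch of wrapping a model with PyTorch’s FSDP module. The tiny model, the hyperparameters and the torchrun-style multi-GPU launch are placeholder assumptions; a real training script would be considerably more involved.

```python
# A minimal sketch of wrapping a model with PyTorch's Fully Sharded Data
# Parallel (FSDP). Assumes a multi-GPU node launched with torchrun, which
# sets the usual distributed environment variables such as LOCAL_RANK.
import os

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group(backend="nccl")        # one process per GPU
local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
torch.cuda.set_device(local_rank)

# A placeholder model; real workloads would be billion-parameter networks.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
).cuda()

# FSDP shards parameters, gradients and optimiser state across the GPUs,
# so each device only holds a slice of the full model at any one time.
model = FSDP(model)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
inputs = torch.randn(8, 1024, device="cuda")
loss = model(inputs).sum()
loss.backward()
optimizer.step()
```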

Decentralisation as an idea, from web3, the third-generation internet

We are, at present, living in the second generation of the world wide web. The incoming third generation (it’ll be a slow shift, with no hard timelines) is web3, a term coined by computer scientist Gavin Wood in 2014. Cryptocurrencies, decentralised ledgers, non-fungible tokens (NFTs) and digital, uniquely tokenised versions of real things, such as tickets to a concert or a piece of sporting memorabilia, are all horizons waiting to be unlocked.

That is the key element: the idea is to make the underlying tech available to everyone and to make open standards the default, rather than the current norm of walled gardens set by tech giants such as Google and Meta.

For instance, a transaction or new data added to a digital ledger is updated and reflected at multiple places (the technical term is nodes) where other copies of that ledger exist. The advantage is that there is little risk of a single point of failure. Theoretically, it sounds great. How this pans out in reality, and at scale, is anyone’s guess for now.
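
As a toy illustration of that idea, the Python sketch below appends the same entry to several copies of a simple hash-chained ledger, so every node ends up with an identical record. The classes and hashing scheme are assumptions made for illustration, not how any production blockchain actually works.

```python
# A toy sketch of how the same ledger entry can be reflected on multiple
# nodes, so no single machine is a point of failure. Illustrative only.
import hashlib
import json


class LedgerNode:
    def __init__(self, name: str):
        self.name = name
        self.chain = []  # each entry links to the hash of the previous one

    def append(self, data: dict) -> None:
        prev_hash = self.chain[-1]["hash"] if self.chain else "genesis"
        payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.chain.append({"data": data, "prev": prev_hash, "hash": entry_hash})


def broadcast(nodes: list, data: dict) -> None:
    # The same transaction is appended to every node's copy of the ledger.
    for node in nodes:
        node.append(data)


nodes = [LedgerNode("node-a"), LedgerNode("node-b"), LedgerNode("node-c")]
broadcast(nodes, {"txn": "concert ticket transferred"})
# Every copy ends up with an identical latest hash, so any single node can
# fail without the record being lost.
print({n.name: n.chain[-1]["hash"] for n in nodes})
```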
