When we think of centralized AI gone wrong, imaginative readers may begin to conjure images of a society ruled by artificial general intelligence-powered robot overlords.
Just as the Bitcoin movement was born out of a desire to create a permissionless financial system, decentralized AI aims to democratize access to AI technologies by removing gatekeeper chokepoints.
Decentralization, in step with the open-source programming philosophy, offloads the heavy burden of auditing and data security to a global community. Proponents believe this community is an effective watchdog against a centralized AI Frankenstein growing too strong for any single organization to control.
If it sounds dystopian, that’s because it is—such was the plot of Terminator; Arnold’s character was just one of a series of machines designed by Skynet, a fictional artificial intelligence created by a centralized organization.
While decentralized AI isn’t a guaranteed insurance policy against dystopia, it points toward a necessary alternative to the nascent centralized systems, whose true risks we may not realize until it’s too late.
Why Do We Need Decentralized AI in the First Place?
Machine learning improves with the quantity of data available, and consumer-friendly products like ChatGPT, Claude, and Perplexity.AI will continue to improve as they grow in popularity.
Users provide these models with training data, often oblivious to the value they generate for private companies just by using them.
AI’s potential to pull the economic rug from virtually any career path and the out-of-whack monetary incentives for users are parts of an entirely different rabbit hole we’ll explore in a future article.
Perhaps the greatest risk is what happens if these highly trained models end up in the wrong hands.
OpenAI, for example, has been under intense scrutiny over recent moves.
Apple and OpenAI announced a partnership on June 10th, 2024, that would bring ChatGPT into Apple’s suite of products, including iOS, MacOS, and iPadOS, a collaboration that’ll likely lead to a mindblowing customer experience far exceeding any recent iOS update.
The other hand, however, is troublesome.
The press release states, “requests are not stored by OpenAI, and users’ IP addresses are obscured. Users can also choose to connect their ChatGPT account, which means their data preferences will apply under ChatGPT’s policies.”
Still, OpenAI’s models would be trained on incredibly intimate personal behavior and information, which could be used for virtually any purpose.
Ok, CoinCentral, let’s take off the tinfoil hats for a moment. This doesn’t seem much more invasive than Siri or Alexa; besides, what could a company like OpenAI do with this data?
We’re not in the business of fear-mongering or speculation, but the juxtaposition with this next data point is concerning.
Just three days after the Apple news, OpenAI announced the appointment of former NSA Director (2018 to 2023) and retired U.S. Army General Paul M. Nakasone to its Board of Directors.
While we shouldn’t be dismissive of the NSA’s counter-terrorism and cyber defense efforts, it’s still a government apparatus notorious for spying on its citizens, popularized by NSA whistleblower Edward Snowden, who also criticized the appointment.
They’ve gone full mask-off: do not ever trust @OpenAI or its products (ChatGPT etc). There is only one reason for appointing an @NSAGov Director to your board. This is a willful, calculated betrayal of the rights of every person on Earth. You have been warned. https://t.co/bzHcOYvtko
— Edward Snowden (@Snowden) June 14, 2024
We don’t mean to paint OpenAI, Nakasone, or the NSA with ominous and villainous overtones, but the reality is that, through relatively straightforward and flashy integrations, people may walk backward into a surveillance state they wouldn’t be inclined to enter through the front door.
Knowledge is power, but not in the way your stereotypical classroom motivational poster claims. As useful and constructive as a tool like generative AI is, it carries an enormous downside risk in the wrong hands, one that must be actively protected against.
Even a glass-half-full approach should acknowledge that while companies like OpenAI aren’t operating with a malevolence bent on world domination, safeguards should still be in place to prevent unintended consequences.
Let’s take a breather from this section’s dramatic flair for a moment and look at decentralized AI’s structural advantages.
Centralized AI vs. Decentralized AI: On Technology
It’s important to frame decentralized AI not solely as a strategic safeguard against centralized AI but as potentially a better system overall.
Let’s explore.
Cost Advantages of DeAI
Scaling a centralized AI system is often compared to building a skyscraper on a single foundation; the higher you go, the more strain you place on that foundation.
Centralized AI demands substantial computational resources, including powerful servers and large-scale data centers. As the appetite for AI grows, these systems face challenges in scaling efficiently and cost-effectively.
ChatGPT, for example, was estimated to cost roughly $700,000 per day in 2023, a figure that has likely increased due to OpenAI’s advanced models—and that’s not counting any of the audio or generative video, as seen with Sora.
Anthropic, creator of the Claude model, has raised over $7.6 billion to compete with OpenAI.
Decentralized AI systems utilize some variation of a network of globally distributed providers of computing power and data storage. Instead of going through AWS or private data centers, a DeAI entrepreneur could tap into a marketplace of 24/7/365 competitively priced compute and data storage resources.
A decentralized AI network could tap into various devices, from high-end servers to personal laptops and even smartphones. For example, by connecting your MacBook to a decentralized AI network, you can contribute its computational power to the collective pool, for which you’d be compensated, likely in the project’s native token.
Editor’s note: We use the term “decentralized AI” or “DeAI” here, but this model has existed since early decentralized computing projects such as Golem. Today, the category of “Decentralized Physical Infrastructure Network,” or DePIN, largely overlaps with the creation and operation of compute marketplaces.
This distributed approach creates a marketplace for idle (or active) compute resources, aligning proper business incentives for aspiring DeAI entrepreneurs and cash flow seekers alike.
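The matchmaking at the heart of such a marketplace can be sketched in a few lines. This is a minimal illustration under our own assumptions; the provider names, pricing model, and `cheapest_match` helper are hypothetical, not any specific DeAI project’s API.

```python
from dataclasses import dataclass

@dataclass
class ComputeOffer:
    provider: str          # a node's identifier on the network (hypothetical)
    gpu_hours: float       # capacity the node currently offers
    price_per_hour: float  # asking price, e.g. in the network's native token

def cheapest_match(offers, hours_needed):
    """Pick the lowest-priced offer with enough capacity for the job."""
    viable = [o for o in offers if o.gpu_hours >= hours_needed]
    if not viable:
        return None  # no single provider can cover the job
    return min(viable, key=lambda o: o.price_per_hour)

# A mix of high-end and consumer hardware competing on price
offers = [
    ComputeOffer("datacenter-node", 500, 2.10),
    ComputeOffer("gaming-rig", 48, 0.80),
    ComputeOffer("macbook-node", 6, 0.45),
]

best = cheapest_match(offers, hours_needed=24)
# The MacBook is cheapest but lacks capacity, so the gaming rig wins the job
```

In practice, a real network would also weigh reliability, latency, and stake, but price-based matching captures the core incentive: idle hardware competes down the cost of compute.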
Centralized AI vs Decentralized AI: Efficacy, Data Security, and Privacy
By definition, centralized systems tend to have single central points of failure, making them susceptible to attacks and technical failures. A compromised central server can disrupt the entire AI system, leading to downtime and data loss.
Famously, a distributed network made possible by blockchain should, theoretically, never experience downtime.
Downtime is just the cost of doing business; most popular centralized businesses, such as OpenAI and Meta, have their own downtime trackers.
Blockchain, which underpins most DeAI systems, provides a secure and immutable ledger of transactions and interactions within the network. This makes it nearly impossible for malicious actors to alter or corrupt the system without detection.
A large enough network should be fine even if some nodes go offline or are compromised.
It’s this distributed nature that inherently provides continuous uptime, ensuring that AI services remain available and providing a level of reliability that centralized systems struggle to match.
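The tamper-evidence property described above comes from hash-linking: each block commits to the hash of the block before it, so editing any historical entry invalidates every link after it. A toy sketch (not any production blockchain’s format) makes this concrete:

```python
import hashlib
import json

def block_hash(contents):
    # Deterministically hash a block's contents, which include the
    # previous block's hash -- this is what chains the ledger together
    payload = json.dumps(contents, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev_hash": prev}
    block["hash"] = block_hash({"data": data, "prev_hash": prev})
    chain.append(block)

def verify(chain):
    # Recompute every hash; any edit to earlier data breaks the links after it
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev_hash"] != expected_prev:
            return False
        if block["hash"] != block_hash(
            {"data": block["data"], "prev_hash": block["prev_hash"]}
        ):
            return False
    return True

chain = []
append_block(chain, "model-update-1")
append_block(chain, "model-update-2")
assert verify(chain)           # the untouched chain checks out

chain[0]["data"] = "tampered"  # a malicious edit...
assert not verify(chain)       # ...is immediately detectable
```

Real networks add consensus, signatures, and proof-of-work or proof-of-stake on top, but the detectability of tampering reduces to this hash-chaining idea.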
And then there’s the 21st-century digital bogeyman– data privacy.
Imagine a vault containing the most valuable jewels in the world, all stored in one place– enough to make a goblin salivate.
Centralized AI systems store vast amounts of data in central servers, making them a honeypot for cyberattacks. A single breach could expose sensitive information, causing irreparable harm to individuals and organizations alike.
However, these data breaches aren’t necessarily the same thing as your Reddit password being exposed. The specter of surveillance looms large, as entities with access to centralized data can misuse it, eroding public trust and raising ethical concerns about privacy and consent.
It’s not a matter of a centralized company willingly using your data for evil but of the potential for them to neglect to secure it, as has been seen time and time again.
Artificial intelligence innovation is moving at light speed, evidenced by OpenAI’s breakout ChatGPT app, which notched an estimated 100 million monthly users in just two months upon launch, making it the fastest-growing consumer app of all time– a milestone that took runner-up Facebook four and a half years.
For the average user, what’s not to love? ChatGPT is regarded as very accessible, with a free version, a premium tier at $20 per month, and reasonably priced API access for developers.
However, centralized AI is like a gated community, where only a select few have access to the resources needed for development.
It goes beyond API accessibility; the average startup doesn’t have Google or Microsoft money to spend $700,000 daily on operational expenses.
Many decentralized AI startups focus on providing developers with a marketplace where people from all over the world provide compute resources and access to servers at reasonable prices.
AI & Data Transparency
Nearly all centralized models operate behind a veil.
The data in this “black box” of sorts used to train these systems is collected from limited, largely unknown sources, introducing biases that can lead to inaccurate, unfair, and discriminatory outcomes.
Granted, this threat level seems negligible if something like ChatGPT’s chicken casserole recipe calls for using egg whites instead of egg yolks. Still, as society grows more dependent on AI’s efficiencies, biases and inaccuracies can compound into enormous issues.
Without transparency in their decision-making processes, it becomes challenging to identify and correct these biases, undermining the ethical integrity of AI applications.
Centralized data sets can’t be reliably audited by third parties, whereas decentralized AI companies commonly open source the entire model, including its training data.
Some DeAI startups are even working on marketplaces where users can buy and sell their data or lease individual models specifically trained on a narrow task with exceptionally high-fidelity data.
These “personal AI experts” will likely have enormous commercial demand, especially if they can reduce the likelihood of error for an incredibly useful and scalable system prototyped by ChatGPT.
But what about licensing? What if we know a model is being trained on data from the New York Times, Reddit, or our favorite site, CoinCentral?
Here’s the thing with licensing: it’s expensive but also creates another regulatory chokepoint where a rulemaker can determine which data should or shouldn’t be permissible to be licensed.
For starters, licensing costs may dissuade budding AI startups from building, if they can even obtain permission to license the data.
However, the notion of using licenses to restrict and control AI startups isn’t just Libertarian propaganda; it’s a courtroom drama evolving as we write this article.
An August 2023 proposal by anti-crypto Senators Lindsey Graham and Elizabeth Warren advocated for creating a new federal agency, structured as a commission, to “regulate digital platforms, including with respect to competition, transparency, privacy, and national security.”
The proposal would require “dominant platforms” (as defined by the commission) to obtain a license. The commission could also revoke licenses if a platform has “engaged in repeated, egregious, and illegal misconduct” and hasn’t undertaken measures to address the misconduct.
The White House-issued AI Bill of Rights strikes a similar tone of regulatory control in the name of protection.
Our discussion here isn’t whether AI innovation should be regulated or left completely laissez-faire, as there are extensive arguments on either side; the point is that this model does little to address data transparency.
If anything, it further muddles it, enabling a regulatory body to omit or add information with the threat of revocation or the reward of a permit.
Many decentralized AI companies offer marketplaces for models, where third parties can actively audit, confirm, and even contribute to the training data for each model.
Challenges for Decentralized AI
Throwing the blockchain on something isn’t necessarily a panacea, and DeAI is fraught with challenges and shortcomings.
If centralized AI is in its nascent baby deer-legged stages, DeAI has yet to leave the womb.
Few working models exist, let alone functional, robust marketplaces capable of rivaling the models made by OpenAI, Anthropic, Perplexity, Google, or most of what’s shared on the AI model directory Hugging Face.
One of DeAI’s premier challenges is its reliance on diverse and distributed data sources. This can lead to discrepancies in data quality, making it difficult to maintain the accuracy and reliability of AI models.
As such, data integrity and standardization across a network is a critical challenge.
While DeAI aims to enhance security and trust, it introduces new vulnerabilities; its distributed nature makes it susceptible to various attacks, such as data poisoning or Sybil attacks, where malicious actors can disrupt the system by flooding it with fake nodes.
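One common mitigation for Sybil attacks, used in many proof-of-stake designs, is to weight influence by stake rather than by node count, so spinning up thousands of fake nodes buys an attacker almost nothing. A minimal sketch, assuming a simple (answer, stake) voting scheme of our own invention:

```python
def weighted_vote(votes):
    """Tally votes weighted by stake and return the winning answer.

    votes: list of (answer, stake) tuples -- a hypothetical scheme,
    not any specific network's consensus protocol.
    """
    tally = {}
    for answer, stake in votes:
        tally[answer] = tally.get(answer, 0.0) + stake
    return max(tally, key=tally.get)

# Five honest nodes with real stake agree on the correct result...
honest = [("result-A", 100.0)] * 5

# ...while a Sybil attacker floods the network with 10,000 near-zero-stake nodes
sybil = [("result-B", 0.01)] * 10_000

# Stake, not headcount, decides: 500.0 for A vs. 100.0 for B
winner = weighted_vote(honest + sybil)
```

The attacker still wins if they acquire a majority of stake, which is why stake weighting is a cost barrier rather than a complete defense; data poisoning requires separate safeguards such as audited training sets.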
And then, there’s product-market fit for each marketplace.
However, don’t let the novelty of DeAI dissuade you from learning about it. If the velocity of innovation of its centralized counterpart is any indication, it will move very quickly.
Final Thoughts: So, Why Decentralized AI?
Leaving the practical advantages described above aside, the spirit of decentralization is an extension of the rebellious demand for self-sovereignty and freedom from central control, as seen with Bitcoin.
Understandably, crypto OGs may get flashbacks to the ICO craze of 2017 upon hearing of the combination of two trending technologies (cryptocurrency and AI), but we implore our readers to consider the severity of what’s at stake.
Getting down to brass tacks, the DeAI niche is about marketplaces: people who buy and sell compute and data storage resources, people who buy and sell access to AI models, and people who participate in the economics of native tokens via staking or liquidity provision.
The more people who enter the DeAI space, whether driven by a passion for self-sovereignty or general curiosity, the more privacy, security, and community-driven innovation become the standard for technology rather than fringe features.