Opinions expressed by Entrepreneur contributors are their own.
We’re on the cusp of a technological revolution not seen since the dotcom boom of the ’90s. Microsoft and Google are racing to launch competing products based on the tech driving it. All that’s left is for smaller startups to rebrand themselves to ride the hype and boom! We’ve landed ourselves in a bubble.
Everyone who was around during the NFT golden era of 2021 knows exactly where I’m going with this. The hype surrounding OpenAI’s generative AI chatbot, ChatGPT, is giving us all a dose of deja vu. Luckily, there are key differences between the AI paradigm shift we’re currently experiencing and the NFT bubble from a year and a half ago.
It’s crucial to separate fact from fiction and ensure AI innovators seize on this moment to push the boundaries of the technology efficiently and ethically.
The technology itself
While we can draw lessons from the NFT boom of 2021, from a strictly technological standpoint, ChatGPT simply blows the Ethereum wallet on which you store NFT jpegs out of the water.
We’re talking about a complex large language model (LLM) that digests massive quantities of text data and infers relationships between words within the text. Essentially, LLMs fill in the blank with the most statistically probable word given the surrounding context, and ChatGPT is doing this at a scale never seen before to write poems, movie scripts and essays.
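For readers who want a feel for what "most statistically probable word" means in practice, here is a deliberately tiny, hypothetical sketch in Python: it builds a bigram count table from a toy corpus and predicts the most likely next word. Real models like ChatGPT use neural networks trained on vastly more data, but the underlying idea of predicting a likely continuation from context is the same.

from collections import Counter, defaultdict

# Toy corpus standing in for the massive text data a real model digests.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the statistically most probable word to follow `word`."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # -> "cat", the most frequent continuation of "the"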
Conversely, NFTs are tokens recorded on a blockchain and held in wallets to represent digital ownership of a particular asset, whether digital or physical. That could be a painting, a car or a meme. So the "NFT technology" we're talking about is really just shorthand for "blockchain."
That's not to downplay the potential of blockchain, and NFTs in particular, to solve the digital ownership problem. For example, a world in which musicians regain the ability to own and sell their music online sounds promising for creators who have drawn the short straw in the internet's democratization of information. It does mean, however, that the technology's potential to radically transform industries was massively exaggerated by many of the companies selling themselves as "Metaverse" and "NFT" platforms. And it's certainly limited compared with the potential of AI.
After years of trying, blockchain enthusiasts still haven't found a use case that sparks mass adoption. Sure, some ordinary people invested in bitcoin and bought NFTs in 2021. But compare that to the number of offices that started using ChatGPT within days of its launch, and we have a clear winner.
The challenges ahead
It’s a lot harder to convincingly “fake” being an AI company. The blockchain industry is so intentionally confusing that companies in 2021 were trying to pass off digital art that wasn’t even blockchain-based as “NFTs,” and standard Play-to-Earn (P2E) games were adding “Metaverse” to their messaging.
That simply won’t be a problem for AI. Instead, the AI industry has more serious challenges with which to contend. Companies across virtually every industry will integrate and build on top of ChatGPT and other successful generative AI tools, finding new and interesting use cases for them.
For that to happen, AI innovators will have to spot ChatGPT's flaws and leverage its strengths. Dr. Michal Tzuchman-Katz, Co-Founder and Chief Medical Officer at Kahun Medical, points to the improvements an AI model like ChatGPT would need in order to make a dent in healthcare and better serve doctors. The company built an AI tool that "thinks like a doctor" and handles clinical intake for doctors before patient visits.
While ChatGPT might be able to make textual interaction with patients smoother, it can't think clinically the way Kahun's tool does: that tool consults its own database of peer-reviewed medical literature to produce responses and traces each answer back to its originating sources.
ChatGPT, on the other hand, produces answers from statistical patterns learned across its training data and isn't transparent about its sources. That's a problem for other industries, too. There's talk about students using ChatGPT to write essays and answer homework questions. But professional journalists and authors won't be able to use the model for much beyond ideation and outline building if it can't cite its sources thoroughly enough.
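To illustrate the kind of source transparency being described here, the following is a minimal, hypothetical Python sketch of retrieval with citations: answers are drawn only from a small curated document set, and each response carries a pointer back to the document it came from. It is not Kahun's or OpenAI's actual implementation, just a common pattern for making AI output traceable.

# Hypothetical sketch: answers come from a curated document set,
# and every response cites the document it was taken from.
DOCUMENTS = [
    {"source": "Journal A (2021), p. 12", "text": "aspirin can reduce fever and mild pain"},
    {"source": "Journal B (2020), p. 45", "text": "hydration supports recovery from mild viral illness"},
]

def answer_with_citation(question: str) -> dict:
    """Pick the document with the most word overlap and cite it."""
    question_words = set(question.lower().split())
    best = max(DOCUMENTS, key=lambda doc: len(question_words & set(doc["text"].split())))
    return {"answer": best["text"], "cited_source": best["source"]}

print(answer_with_citation("does aspirin reduce fever"))
# -> cites "Journal A (2021), p. 12" alongside the answer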
And then there’s the bias problem. Conservative commentators have reveled in tweeting about examples of ChatGPT showing an obvious left-leaning bias. AI more broadly is also riddled with racial bias. Finding a solution to this will be one of the biggest challenges AI innovators face in expanding the technology’s use.
As for accuracy, we can, of course, expect ChatGPT to improve quite rapidly. The goal going forward for AI innovators is to take part in its expansion and improve upon it. Adding a transparency layer and tackling the bias problem will be key to ensuring the technology becomes more ethical and practical overall.