Lately I get asked to speak quite a bit about how to use ChatGPT, especially in the learning environment. It feels like everyone is afraid of missing the train and is urgently looking for ways to completely transform the educational process around this new piece of technology.
This feeling is fueled by the endless posts from internet celebrities insisting that the education industry has to embrace it, or something bad will happen to it.
This rush to embrace and immediately integrate every new piece of technology that emerges is highly disruptive for society (and not in a good way), and pushing people to rush into it is irresponsible at the very least.
While ChatGPT has potential (as does any other piece of tech), so far it's been rightly called "the biggest bullshit generator that ever existed". It's extremely good at producing plausible-sounding answers with no scientific evidence behind them, and at inventing non-existent sources and plain lies (just google it). This very morning I asked it for some examples of data-related biases, and it came up with a story about Amazon that never happened (and acknowledged the error in the next iteration).
Why do we rush to immediately embrace something like this as soon as it appears? Wouldn't it be wiser to see how it works first, go through a number of iterations, and roll it out step by step?
Imagine the same rushed adoption of technology happening in healthcare. Whenever a new drug was developed, instead of going through years of testing, FDA approval, etc., it would be immediately rolled out to the entire population; doctors would be encouraged to prescribe it to patients; and coaches and trainers would explain to clients how to use it to improve their health before any evidence appeared. And if you voiced concerns about it, you'd be called a Luddite.
Wouldn't that be a mess, and detrimental to public health? (You can argue that this is exactly what happened with the COVID-19 vaccine, but let's leave that discussion aside for the moment.)
The point is that while the healthcare industry does (or did) have a rigid system of checks and balances, which works (or worked) in most cases, the tech industry has exactly none. And it currently produces powerful innovations daily that easily disrupt existing social processes.
Wouldn't it be a good idea to create a system that allows us to test how a tech product performs and what usage limitations and guidelines it should have, instead of mindlessly pushing it to everyone the moment it appears?
Would love to hear your thoughts.