Dangers of AI – Why It Is Different From Any Other Technological Revolution — by Tae Won Park
Introduction
AI is framed as the next platform shift, akin to the computer or the internet, yet its speed, scope, and nature make it fundamentally different. It touches language, images, decisions, and infrastructure at once, amplifying benefits and risks in ways prior tools did not. Drawing on our experience adopting AI into our own workflow and helping many other teams integrate it, I want to share my opinion on the upsides and risks of AI.
The Double‑Edged Sword: Upsides and Downsides Together
AI democratizes capability: much as Python displaced more “ancient”, harder-to-use languages, modern AI lets many people perform tasks once limited to experts, dramatically raising output per person. But this also fuels worker-replacement anxiety and a reconfiguration of roles; major institutions estimate that large shares of jobs will be reshaped or exposed to automation pressure [1][2].
Usability and accessibility are a major upside: natural-language interfaces lower the barrier for non-technical users, making interaction with machines feel more like conversation between people. In practice, however, this often replaces one problem with another rather than solving it outright: without training, users (especially less tech-savvy ones) fall into classic AI pitfalls, such as over-long chats that hallucinate more, attempts to “fix” a model mid-conversation, or accepting confident but incorrect outputs. Research underscores that reliability issues (e.g., hallucinations) are real and require new guardrails and human-in-the-loop methods [3][7].
Quality assurance is different from past tools. Unlike a tool that exposes its steps and leaves the final evaluation to the user, many AI systems present confident answers without showing their work, which incentivizes under-checking. Combined with fewer junior roles if entry-level tasks are automated, this risks shrinking the pipeline of future experts needed to validate AI's work.
There are other aspects where I am not an expert myself but which I have read and heard about from others. Energy demand from AI data centers is projected to more than double by 2030, straining grids and water resources unless matched with massive efficiency and clean-power investments [4]. Social harms are no longer hypothetical, from deepfakes that erode trust and enable fraud to parasocial AI “companions”; law enforcement and regulators now issue specific alerts on AI-enabled scams [5].
Our Biggest Fear
In our experience, the pattern was consistent: people who understood model behavior, prompting, and specialization achieved better results, while others struggled or even performed worse than they had without AI. Training, process redesign, and clear guardrails mattered more than the choice of tool itself. Without proper training, overall quality improved only slightly at first, then stagnated or sometimes even declined over time.

Many argue that resistance to new technology is nothing new and that humanity has always managed to adapt. While this is true, caution is still warranted. Each technological revolution has arrived faster than the last, leaving more people unable to keep pace. Even today, countless individuals fall victim to basic scams, fake videos, and misleading posts that have existed online for decades. If society still struggles with such long-standing digital risks, how many more people will be left behind by the far more rapid and complex development of AI? At what point does technological change outpace our ability to adapt as a society? We do not yet know the answer, but it is clear that a healthy measure of caution is not only justified but essential.
What Should Be Done
Pair access with education (prompt patterns, verification routines, and the limits of AI) and keep humans in the loop for material decisions [3][7]; a minimal sketch of such a routine follows below. Organizations should publish usage standards, require provenance checks for critical outputs, and monitor model drift. Policymakers should accelerate guardrails on transparency, deepfake labeling, safety evaluation, and workforce-transition support so that benefits are shared while harms are bounded [1][2][5].
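To make “verification routines” concrete, here is a minimal sketch in Python of a self-consistency check: sample the model several times and escalate disagreeing answers to a human reviewer. The `ask_model` callable and the exact-match agreement test are simplifying assumptions for illustration; this is not the uncertainty-estimation method of [3], only the same idea in its crudest form.

```python
# A minimal sketch of one "verification routine": self-consistency voting.
# `ask_model` is a hypothetical stand-in for your LLM client, not a real API.
from collections import Counter
from typing import Callable

def verified_answer(
    ask_model: Callable[[str], str],
    prompt: str,
    samples: int = 5,
    min_agreement: float = 0.8,
) -> str:
    """Sample the model several times; escalate low-agreement answers to a human.

    If a model cannot repeat its own answer, that disagreement is a cheap
    warning sign of unreliability, so a person makes the final call.
    """
    answers = [ask_model(prompt).strip().lower() for _ in range(samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    if count / samples >= min_agreement:
        return top_answer
    # Route to a human reviewer instead of shipping a confident guess.
    return input(f"Model answers disagreed {answers!r}. Human decision: ")
```

Exact-match voting only works for short, canonical answers; real deployments would need to compare meanings rather than strings, which is precisely where methods like [3] come in.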
Limitations of This Article
This is primarily an opinion piece, grounded in personal and team experience rather than systematic study.
Summary
AI clearly boosts accessibility and productivity, and it promises meaningful gains across many domains when used well [6]. But unlike prior “enabling” tools, AI's opacity, pace of development, and systemic reach mean that unchecked speed can leave many behind. With training, accountability, and timely guardrails, we can enjoy the upside without pretending there is no cost.
References
- 1. Kristalina Georgieva, “AI Will Transform the Global Economy. Let's Make Sure It Benefits Humanity,” IMF Blog, 2024. https://www.imf.org/en/Blogs/Articles/2024/01/14/ai-will-transform-the-global-economy-lets-make-sure-it-benefits-humanity
- 2. OECD, Employment Outlook 2024, 2024. https://www.oecd.org/.../oecd-employment-outlook-2024
- 3. S. Farquhar et al., “Detecting hallucinations in large language models using semantic entropy,” Nature, 2024. https://www.nature.com/articles/s41586-024-07421-0
- 4. International Energy Agency (IEA), “AI is set to drive surging electricity demand from data centres,” 2025. https://www.iea.org/news/ai-is-set-to-drive...
- 5. Financial Crimes Enforcement Network (FinCEN), “Alert on Fraud Schemes Involving Deepfake Technology,” U.S. Treasury, 2024. https://www.fincen.gov/.../FinCEN-Alert-DeepFakes
- 6. M. Chustecki et al., “Benefits and Risks of AI in Health Care: Narrative Review,” JMIR Medical Informatics, 2024. https://pmc.ncbi.nlm.nih.gov/articles/PMC11612599/
- 7. N. Andalibi et al., “From Prompt Engineering to Prompt Science with Humans in the Loop,” ACM Digital Library, 2025. https://dl.acm.org/doi/10.1145/3709599