Is AI making us lazy? Or crazy?

Introducing AI in Companies Means Redesigning Work, Not Just Buying Software

Analysis · AI & The Future of Work
The evidence shows that generative AI can produce large task-level gains — but those gains are highly uneven, can reverse in context-heavy work, and often transfer the real burden downstream into review, verification, and rework.

📅 April 2026 · 🕐 18 min read
01

Executive Summary

The central challenge of introducing AI into companies is not simply adoption; it is whether firms can redesign work, training, governance, and accountability fast enough to keep pace with the new output these tools enable. The evidence now shows that generative AI can produce large task-level gains in some settings, but the gains are highly uneven across roles and can reverse when work depends on deep context, subtle judgment, or costly verification.[1][2][3][5][6] That is why the same technology can make one team faster and another team more overloaded.

The short-term human risk is not "AI replaces thinking" — it is "AI front-loads generation and back-loads supervision." When that supervisory layer is invisible, companies mismeasure productivity and workers absorb the cost as fatigue, stress, and shallower learning.

The concern that employees may end up doing more work without deeply understanding it is well grounded. Microsoft researchers find that higher confidence in generative AI is associated with less critical thinking,[11] and their review of overreliance research argues that humans often perform worse with AI than either humans or AI alone because they accept incorrect outputs too readily.[12] Workplace research also finds that oversight-heavy AI use can intensify work and create acute cognitive fatigue rather than reducing it.[13][14]

On employment, the likely near-term effect is not a universal jobs collapse but a messy mix of task reallocation, firm reorganization, selective displacement, and new demand for complementary skills. The World Economic Forum projects 170 million roles created and 92 million displaced by 2030,[8] but real-world evidence is more mixed. The OECD finds that most GenAI-using SMEs report no net change in staffing needs,[7] while firm-level research finds AI-investing firms tend to grow sales, employment, and innovation.[16] At the same time, early evidence suggests pressure on some substitutable skills and on entry-level work,[18][19] and reporting documents a rise in AI-linked layoffs among firms under efficiency pressure.[20]

The biggest medium-term risk is unequal diffusion. The Federal Reserve Bank of New York shows that access, usage, and training are concentrated among higher-income, more educated, and full-time workers,[9] while OECD and IMF research shows adoption is concentrating in larger firms, richer regions, and advanced economies.[6][23] If companies treat AI as a simple productivity mandate instead of a managed organizational transition, the result is likely to be hidden rework, weaker apprenticeship, more burnout, and wider inequality — even where headline output rises.

02

Why AI Rollout Often Becomes a Productivity Race

The pace of diffusion matters because it changes worker incentives. An NBER survey found that by late 2024, nearly 40% of U.S. adults aged 18–64 were using generative AI, 23% of employed respondents had used it for work in the previous week, and 9% were using it every workday.[10] Gallup reported in February 2026 that 13% of U.S. employees use AI daily, 28% use it a few times a week or more, and half use it at least a few times a year.[15] In the New York Fed survey, 39.2% of workers who value AI training said one reason is that they expect there will not be many jobs in the future that do not use AI.[9]

~40% of U.S. adults aged 18–64 using GenAI by late 2024
28% of U.S. employees using AI a few times/week or more (Feb 2026)
39% of workers expect few future jobs won't require AI

Taken together, these data imply a ratchet effect: once AI becomes common enough, not using it begins to feel risky for both employees and firms.

The AI Productivity-Race Loop
Competitive pressure to adopt AI → faster generation of drafts, code, and analysis → higher review, verification, and integration burden → are training, governance, and workflow redesign in place? If no: hidden rework, burnout, failures, and renewed pressure to justify ROI. If yes: selective automation, learning, and better outcomes.


This is the basic dynamic behind the "productivity race." When organizations benchmark success by volume alone, AI first accelerates output, then transfers the bottleneck downstream into review, exception handling, compliance, and coordination.[6][12][14] That pattern is consistent with both field evidence and recent corporate anecdotes that have dominated media coverage.
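The bottleneck transfer described above is easy to see with a toy throughput model. This is a minimal sketch with illustrative assumed numbers, not figures from the cited studies: end-to-end delivery is capped by the slower of the generation and review stages, so accelerating generation alone mostly grows the unreviewed backlog.

```python
def net_throughput(gen_rate, review_rate):
    """Items actually shipped per week: capped by the slower stage."""
    return min(gen_rate, review_rate)

def backlog_growth(gen_rate, review_rate):
    """Unreviewed items accumulating per week when generation outpaces review."""
    return max(0, gen_rate - review_rate)

# Illustrative numbers: AI triples draft generation; review capacity is unchanged.
before = net_throughput(gen_rate=10, review_rate=12)  # 10 shipped/week, no backlog
after = net_throughput(gen_rate=30, review_rate=12)   # still only 12 shipped/week
pileup = backlog_growth(30, 12)                       # 18 unreviewed items/week
```

On these assumptions, tripling generation raises shipped output by only 20%, while the review queue grows without bound — which is why the "training, governance, and workflow redesign in place?" branch of the loop is decisive.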

Case Study — The Code Glut

A prominent example appeared in reporting by the New York Times and Futurism:[30] according to the security startup StackHawk, a financial-services company's coding output rose roughly tenfold after it adopted the AI coding tool Cursor, leaving a backlog of one million lines of code to review. The "code glut" increased vulnerabilities and created stress across adjacent departments. The anecdote fits the broader research pattern: when output rises before verification capacity, governance, and role boundaries are redesigned, hidden work multiplies.

03

Employment: More Task Reallocation Than Instant Mass Unemployment

A useful starting point is to separate exposure from replacement. Research from OpenAI, later published in Science, estimates that roughly 80% of U.S. workers are in occupations where at least 10% of tasks are exposed to large language models, and about 19% are in occupations where at least 50% of tasks are exposed.[21] But exposure measures technical applicability; it is not an automatic forecast of layoffs. Brookings has therefore argued that the labor-market evidence is still in its "first inning" — and its later update still finds no jobs apocalypse, at least not yet.[22]

The near-term employer picture is similarly mixed. The World Economic Forum's 2025 survey projects that job disruption will affect 22% of jobs by 2030, with 170 million roles created and 92 million displaced, for a net gain of 78 million.[8] But that is an expectation survey of employers, not an observed outcome. It is best read as a signal that firms expect substantial restructuring, not as a deterministic forecast.

At the company level, the OECD's cross-country SME survey is one of the most useful real-world snapshots because it asks firms what GenAI has already changed. Among GenAI-using SMEs, 83% reported no effect on overall staff need, 6% reported an increase, and 9% a decrease. The same survey found that GenAI is being used mainly to raise employee performance, not to eliminate roles immediately.[7]

Measured Task-Level Effects from AI Assistance Studies
Call-center productivity: +15% [1] · Writing speed: +40% [2] · Writing quality: +18% [2] · Consulting tasks: +25% faster [3] · Experienced open-source developer speed: −19% [5]

These figures mix throughput, speed, and quality — but they illustrate the central empirical reality: AI effects are real, material, and highly context-dependent. The negative figure comes from a randomized controlled trial in which experienced developers slowed down with early-2025 AI tools.[1][2][3][5]

A second reason not to expect a single uniform employment effect is that AI appears to reorganize firms internally. Research by Tania Babina and coauthors finds that AI-investing firms experience higher growth in sales, employment, and market valuations, primarily through increased product innovation.[16] A related NBER paper finds that AI adoption is associated with a flatter hierarchy, with larger shares of junior staff and smaller shares of middle-management and senior roles.[17] That points toward reallocation and restructuring, not just simple substitution.

That said, there are already visible fault lines. Published early evidence finds that labor demand rose for skill clusters complementary to GenAI but fell by 20–50% for substitutable skills such as writing and translation.[18] A Stanford briefing concludes that recent data are consistent with the hypothesis that GenAI has begun to affect entry-level employment.[19] And layoffs explicitly linked to AI are emerging in the industries most exposed to automation, even if the overall labor market still does not show mass displacement.[20]

04

Wellbeing, Skill Erosion, and the Verification Burden

The concern that workers will be asked to do more while understanding less is one of the strongest findings in the emerging literature. Microsoft Research surveyed 319 knowledge workers who provided 936 real-world examples of GenAI use and found that higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking.[11] That does not mean everyone becomes unskilled. It means the locus of cognition shifts: workers increasingly judge and select from AI outputs instead of generating first-pass reasoning themselves.

Before AI: research → reason → draft. The worker generates the first-pass thinking, and deep understanding is built in along the way. After AI: AI drafts → worker reviews → accept/fix. The worker validates and integrates AI output, and the verification burden is often invisible.

The verification problem is larger than a single survey. Microsoft's "appropriate reliance" report defines overreliance as accepting incorrect AI outputs and argues that it is a barrier to productive human-AI collaboration. Importantly, the report summarizes a meta-analysis of 106 experiments and notes that people often perform worse with AI than when working alone, or worse than AI alone, because the interaction is poorly designed.[12]

Key Finding — "AI Brain Fry"

A February 2026 HBR piece argues that AI does not necessarily reduce work; in many organizations, it intensifies it.[13] A separate BCG/HBR study of 1,488 U.S. workers found that high-oversight AI use was associated with 14% more mental effort, 12% greater mental fatigue, and 19% greater information overload. The authors call this pattern "AI brain fry": not classic burnout from chronic overwork, but acute cognitive exhaustion from constant monitoring, correction, context switching, and exception handling.[14]

The software evidence is especially telling because it shows how easily workers can misperceive what AI is doing to them. In the METR randomized trial on experienced open-source developers, participants forecast that AI would make them 24% faster and, even after the study, still believed it had sped them up by 20%. In reality, AI use made them 19% slower on average.[5] Much of the lost time came from reviewing and correcting directionally plausible but imperfect AI suggestions. That is almost exactly the pathology feared by critics: the organization feels faster, while actual verification work quietly expands.
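A back-of-the-envelope decomposition shows how this misperception can arise. The numbers below are assumed purely for illustration (they are not the METR paper's internal data): a drafting step that feels dramatically faster can coexist with a net slowdown once review and correction time is counted.

```python
# Assumed decomposition of one task, in minutes; illustrative only.
baseline = 60    # unaided: research, reason, draft
ai_drafting = 25  # with AI: prompting and drafting feels much faster...
ai_review = 46    # ...but reviewing and fixing plausible-looking output dominates

with_ai = ai_drafting + ai_review            # 71 minutes total
change = (with_ai - baseline) / baseline     # ≈ +0.18, i.e. ~18% slower overall
```

The drafting step alone shrank by more than half, which is what the developer experiences; the total time went up, which is what the stopwatch records.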

05

Productivity, Quality, and Firm Success

Despite those risks, the positive productivity evidence is too strong to dismiss. In a large field study published in the Quarterly Journal of Economics, AI assistance raised customer-support productivity by 15% on average and by about 30% for lower-skilled and less experienced workers.[1] In professional writing tasks, a Science paper found that access to ChatGPT cut completion time by 40% and improved output quality by 18%.[2] In the BCG consulting experiment, GPT-4 users completed 12.2% more tasks, 25.1% faster, and with roughly 40% higher quality on tasks inside the model's competence frontier.[3] A GitHub Copilot experiment found developers completing a coding task 55.8% faster.[4]

But the same evidence also explains why companies get disappointed after pilots. The BCG study introduced the now-famous idea of a "jagged technological frontier": AI can be very strong on some tasks and surprisingly poor on adjacent ones that look similar from the outside.[3] The METR trial reinforces this by showing that deep-repository, high-context software work can produce the opposite of the usual benchmark result.[5] Even within the customer-support study, the strongest benefits flowed to novices and lower performers, while the most skilled workers saw minimal speed gains and some quality decline.[1]

The "Jagged Technological Frontier"
Where AI excels (big gains possible): routine drafting, data summaries, customer support. Where AI struggles (gains vanish or reverse): complex codebases, strategic judgment.

AI capability is jagged, not smooth. Tasks that seem similar can fall on opposite sides of the frontier — which is why firm success depends on where and how AI is inserted, not simply on whether it is used.[3]

This is why firm success depends less on "using AI" than on where and how it is inserted. The OECD's 2025 review of 80-plus experiments concludes that generative AI does improve productivity, innovation, and entrepreneurship in many settings, but also stresses the role of trust, human expertise, and external validity.[6] Its SME report found that 65% of GenAI-using SMEs said it increased employee performance, 35% said it helped them scale, 29% said it helped them compete with larger firms, and 26% said it increased revenue.[7]

Why, then, are the aggregate productivity numbers still modest? Because organizational complements matter. IMF research estimates that medium-term productivity gains for Europe are likely to be modest — around 1% cumulatively over five years.[23] The OECD similarly argues that firms must adapt organization, processes, and strategy to realize productivity gains.[6] Task gains are real; system gains require redesign.
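The gap between task gains and system gains follows from simple Amdahl-style arithmetic. A sketch with assumed numbers: if the AI-accelerated task is only a fifth of the end-to-end workflow, even a 40% speedup on that task moves the system total by only a few percent.

```python
def system_speedup(task_share, task_speedup):
    """Amdahl-style overall speedup when only task_share of the workflow accelerates."""
    return 1 / ((1 - task_share) + task_share / task_speedup)

# Assumed example: drafting is 20% of the workflow and becomes 1.4x faster.
overall = system_speedup(task_share=0.20, task_speedup=1.40)  # ≈ 1.06, i.e. ~6%
```

Raising system-level gains therefore means redesigning the other 80% — handoffs, review, approvals — not just accelerating the task that AI happens to be good at.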

Corporate Snapshot — Snap, April 2026

Snap told investors in April 2026 that AI now generates more than 65% of its new code and used that efficiency case as part of a restructuring involving about 1,000 layoffs, expecting more than $500 million in annualized savings.[31] That may prove financially rational in the short run. But whether these decisions create durable competitive advantage remains unclear. Productivity and cost savings are not the same thing as long-term resilience, trust, or innovation capacity.

06

Inequality, Governance, Regulation, and Social Effects

The distributional story is as important as the productivity story. In the New York Fed survey, AI use ranged from 15.9% among workers earning under $50,000 to 66.3% among those earning over $200,000; workers with college degrees were more than twice as likely to have used AI at work as those without degrees; only 15.9% of workers reported that their employer offers AI training.[9] OECD research on emerging divides shows that AI champions tend to be larger, more productive firms in ICT and professional services, often clustered in already-advantaged regions.[6]

AI Usage by Income — The Access Gap
Under $50K income: 15.9% · $50K–$200K income: ~45% · Over $200K income: 66.3%. Source: Federal Reserve Bank of New York survey.[9]

At the macro level, the same pattern appears across countries. IMF research argues that AI is likely to exacerbate cross-country income inequality by disproportionately benefiting advanced economies,[23] while the World Bank emphasizes that many low- and middle-income countries face steep challenges in connectivity, compute, context, and competency.[24]

The EU Regulatory Landscape

For firms operating in Portugal and the rest of the European Union, the regulatory picture is already concrete. The EU AI Act entered into force on 1 August 2024. Prohibited practices and AI literacy obligations have applied since 2 February 2025; obligations for providers of general-purpose AI models have applied since 2 August 2025; and the majority of rules, including the main Annex III high-risk regime, apply from 2 August 2026.[25] The Act treats many AI uses in recruitment and worker management as high-risk, and it prohibits workplace emotion recognition except for narrow medical or safety purposes.

Trust and compliance questions go beyond AI-specific law. The European Data Protection Board has warned that AI models trained on personal data cannot automatically be assumed anonymous and require case-by-case assessment.[26] France's CNIL has published practical guidance on how organizations using generative AI can comply with the GDPR.[27] In the U.S., the National Institute of Standards and Technology has released both the Generative AI Profile for the AI Risk Management Framework and a Secure Software Development Framework profile.[28] The direction of travel is clear: companies will increasingly have to prove that their AI use is trustworthy, secure, privacy-aware, and auditable.

One policy weakness is that labor impacts still receive less attention than safety, misinformation, or cybersecurity. An OECD policy brief reviewing G7 measures notes that while governments are discussing responsible use, governance, disinformation, and cyber risk, AI's impact on work remains notably sidelined.[29] If the public debate focuses only on model risk and not on workplace redesign, apprenticeship, bargaining power, and transition support, then companies will optimize for short-run output while society pays the long-run adjustment cost.

07

Risks, Mitigations, and What Companies Should Do

The evidence points to a simple conclusion: the most important AI question for firms is not "How much can we automate?" but "What mix of automation, augmentation, training, and accountability improves outcomes after rework, risk, and adaptation costs are included?"[6][12][9]

Risk pattern · Why it emerges · Main mitigation

Output glut. Why it emerges: faster generation creates review backlogs and hidden queues; developers may feel faster while actually slowing down.[5][30] Mitigation: gate AI output with acceptance tests, peer review, and post-release defect/vulnerability tracking before it reaches production.

Automation bias. Why it emerges: higher confidence in GenAI correlates with less critical thinking; human+AI can underperform when users accept wrong outputs.[11][12] Mitigation: require source checks, explicit verification prompts, second opinions for high-risk tasks, and "design for error" interfaces.

Work intensification. Why it emerges: AI often shifts work from creation to monitoring; oversight-heavy use raises fatigue and information overload.[13][14] Mitigation: use AI to remove drudgery rather than multiply dashboards and approval steps; limit tool sprawl and redesign handoffs.

Training divide. Why it emerges: access and training are concentrated among higher-income, better-educated workers and larger firms.[9][6] Mitigation: provide universal role-based training, shared access to approved tools, and support targeted at SMEs and lower-income workers.

Entry-level erosion. Why it emerges: routine junior tasks are easiest to automate, and early evidence suggests growing pressure on entry-level work.[18][19] Mitigation: preserve apprenticeship tasks, pair juniors with AI and mentors, and redesign entry roles around judgment and exception handling.

Security & privacy. Why it emerges: AI-generated code can introduce vulnerabilities, and enterprise AI use can involve personal data that is not automatically anonymous.[26][30] Mitigation: use approved-tool lists, data classification rules, secure SDLC controls, privacy impact assessments, and audit logs.

HR & surveillance harms. Why it emerges: employment AI is high-risk in the EU, and workplace emotion recognition is banned.[25] Mitigation: avoid intrusive monitoring, audit HR tools for fairness and explainability, and document human oversight and worker notification.

Unequal market outcomes. Why it emerges: AI leaders are concentrated in larger firms, richer regions, and advanced economies.[6][23][24] Mitigation: combine competition policy, diffusion support, public training, and infrastructure investment so gains do not stay locked inside frontier firms.

For Companies: The Evidence-Based Playbook

First, deploy AI by workflow, not by slogan: task mapping should come before license rollouts because the empirical frontier is jagged.[3] Second, measure net productivity — including review time, customer escalations, compliance exceptions, defect rates, and worker fatigue — not just throughput.[5][14] Third, make AI training universal and role-specific; the New York Fed data show that demand for training is high while provision remains scarce, and EU law already makes AI literacy relevant in practice.[9][25] Fourth, protect junior development pathways by keeping some work human-first and pairing AI use with mentoring — otherwise firms may save labor today while damaging their own future leadership and capability pipeline.[19][17]

For Policymakers: Transition Quality Over Hype

The priority should be transition quality rather than abstract fear or hype. That means funding scalable AI and digital training, especially for workers currently least likely to receive it;[9] updating labor-market protections and mobility support for workers whose tasks are reallocated;[8] enforcing stricter rules for employment-related AI and intrusive worker surveillance;[25] using procurement and standards to push trustworthy deployment;[28] and explicitly bringing labor-market effects into AI governance agendas that still focus too narrowly on model safety alone.[29] The reason is simple: even if AI eventually creates as many jobs as it destroys, poor transitions can still produce years of inequality, lost wellbeing, and institutional mistrust.

08

The Decisive Variable

The likeliest long-term scenario is neither an AI utopia nor an unemployment apocalypse. It is a world in which companies that combine AI with judgment, training, verification, and worker voice become more innovative and resilient — while companies that treat AI as a speed mandate produce more volume, more hidden work, and more social friction.

The decisive variable is not whether firms adopt AI. It is whether they redesign work so that humans remain competent, accountable, and capable of understanding the systems they are now expected to use.[6][11][12]

AI as speed mandate (volume up, understanding down): more output, more hidden rework, more cognitive overload, weaker apprenticeship, wider inequality. The result is fragile and friction-prone. AI as managed transition (judgment plus verification): targeted automation, built-in verification, universal training, protected junior paths, worker voice and accountability. The result is resilient and innovative.

References

  1. Brynjolfsson, E., Li, D. & Raymond, L. (2025). "Generative AI at Work." The Quarterly Journal of Economics, 140(2), 889–942. doi:10.1093/qje/qjae044
  2. Noy, S. & Zhang, W. (2023). "Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence." Science, 381(6654), 187–192. doi:10.1126/science.adh2586
  3. Dell'Acqua, F., McFowland, E., Mollick, E. et al. (2023/2026). "Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality." Harvard Business School Working Paper No. 24-013; forthcoming in Organization Science. doi:10.1287/orsc.2025.21838
  4. Peng, S., Kalliamvakou, E., Cihon, P. & Demirer, M. (2023). "The Impact of AI on Developer Productivity: Evidence from GitHub Copilot." arXiv:2302.06590
  5. Becker, J., Rush, N., Barnes, E. & Rein, D. (2025). "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity." METR. arXiv:2507.09089
  6. OECD (2025). Generative AI, Productivity and Labour Markets: A Review of the Evidence. OECD Artificial Intelligence Papers.
  7. OECD (2025). Generative AI and SMEs: Early Evidence on Adoption and Impact. OECD SME and Entrepreneurship Papers.
  8. World Economic Forum (2025). The Future of Jobs Report 2025. Geneva: WEF.
  9. Federal Reserve Bank of New York (2024). Survey of Consumer Expectations: AI Module. SCE Data Release.
  10. Ellingrud, K. et al. (2024). "Generative AI Adoption in the U.S." NBER Working Paper.
  11. Lee, V. et al. (2025). "The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort." Microsoft Research.
  12. Vasconcelos, H. et al. (2023). "Generation and Appropriate Reliance on AI: A Review and Meta-Analysis of Overreliance in Human-AI Decision Making." Microsoft Research.
  13. Harvard Business Review (2026). "AI Is Making Work More Intense, Not Less." HBR, February 2026.
  14. Ferrara, E. et al. / BCG & HBR (2026). "'AI Brain Fry': How AI-Intensive Work Causes Cognitive Overload." Survey of 1,488 U.S. workers.
  15. Gallup (2026). AI in the American Workplace. Gallup Workplace Report, February 2026.
  16. Babina, T. et al. (2024). "Artificial Intelligence, Firm Growth, and Product Innovation." Journal of Financial Economics.
  17. Babina, T. et al. (2024). "AI Adoption and Organizational Structure." NBER Working Paper.
  18. Elsevier-published study (2024). Early evidence on shifts in labor demand: complementary vs. substitutable skills in the GenAI era. Technological Forecasting and Social Change (specific authors per original).
  19. Stanford HAI (2025). "Generative AI and Entry-Level Employment: A Research Briefing." Stanford Institute for Human-Centered AI.
  20. Reuters (2025–2026). Reporting on AI-linked layoffs. Multiple articles, including coverage of financial services, technology, and media sectors.
  21. Eloundou, T., Manning, S., Mishkin, P. & Rock, D. (2023). "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models." arXiv:2303.10130; published in Science, 2024.
  22. Brookings Institution (2024). "AI and the Labor Market: Still in the First Inning." Brookings Reports.
  23. International Monetary Fund (2024). World Economic Outlook, Chapter 3: AI and the Future of Work. Washington: IMF.
  24. World Bank (2024). Digital Progress and Trends Report: AI, Connectivity, and Development. Washington: World Bank Group.
  25. European Union (2024). Regulation (EU) 2024/1689 — the AI Act. Entered into force 1 August 2024; phased implementation through August 2026.
  26. European Data Protection Board (2024). Opinion on AI Models and Personal Data. EDPB guidance.
  27. CNIL (France) (2024). Practical Guide: Generative AI and GDPR Compliance. Paris: CNIL.
  28. NIST (2024). AI 600-1: Generative AI Profile for the AI Risk Management Framework; SSDF Profile for Generative AI and Dual-Use Foundation Models. Gaithersburg: NIST.
  29. OECD (2025). "AI and the Labour Market: Where Is G7 Policy?" OECD AI Policy Brief.
  30. Futurism / New York Times (2025). Reporting on the "code glut": AI-generated coding output and review backlogs, referencing Cursor and StackHawk data.
  31. Snap Inc. (2026). Q1 2026 Earnings Call and investor disclosures: 65% of new code AI-generated; restructuring of ~1,000 roles; $500M+ projected annualized savings.

Published April 2026 · Research compiled from peer-reviewed studies, institutional surveys, and regulatory documents through Q1 2026.
