Resisting AI and the Future of Work
Navigating Technological Transformation in the Modern Workplace
Introduction
The image of a professional grappling with an AI interface has come to symbolise one of the most significant workplace transformations of our era. As artificial intelligence becomes increasingly prevalent across industries, employees must adapt to technologies that promise enhanced productivity while posing existential threats to traditional employment structures. Many workers harbour deep concerns that AI systems are being implemented not merely to assist them, but to replace them, using their own expertise and institutional knowledge to facilitate this transition.
This resistance echoes historical patterns of technological upheaval, from the Industrial Revolution's mechanisation to the digital transformation of the late twentieth century, where people consistently met new technologies with a complex mixture of scepticism, fear, and opposition (Acemoglu & Johnson, 2024). The current AI revolution, however, presents unprecedented challenges that distinguish it from previous technological transitions. Unlike past innovations that primarily affected manual or routine cognitive work, generative AI demonstrates capabilities in areas once considered uniquely human domains, such as complex reasoning (Eloundou et al., 2024).
This comprehensive analysis explores how resistance to AI manifests in contemporary workplaces, examines AI's potential to fundamentally restructure traditional organisational hierarchies and career advancement pathways, and investigates the differential impacts across skill levels and socioeconomic groups. By synthesising recent empirical research with historical technological transitions, we aim to provide a nuanced understanding of the profound changes ahead and their implications for workers, organisations, and society.
Historical Context: Lessons from Past Technological Revolutions
Understanding contemporary AI resistance requires examining historical patterns of technological adoption and worker responses. Research into industrial transformations reveals that technological change consistently presents a double-edged dynamic: it simultaneously boosts productivity and creates wealth while disrupting employment and provoking social backlash (Mokyr, 2016; Frey, 2019; Autor, 2018).
The early nineteenth-century Luddite movement provides particularly relevant insights for understanding current AI resistance. English textile workers, now famously known as Luddites, systematically destroyed knitting machines and other automated equipment between 1811 and 1816, not out of blanket opposition to technology itself, but as a targeted response to employers who used mechanisation to undercut wages, bypass skilled workers, and deteriorate working conditions (Thompson, 1963). Modern historical analysis reveals that the Luddites were fundamentally protesting the social arrangements surrounding technology implementation rather than the machines themselves, seeking fair labour practices and protection of skilled craftwork traditions (Sale, 1995).
This pattern of skill-biased technological change continued throughout the twentieth century, where innovations consistently eliminated routine, repetitive tasks whilst augmenting higher-skill roles requiring complex judgement, creativity, and interpersonal interaction (Autor et al., 2003). The personal computer revolution exemplifies this dynamic, eliminating many clerical positions such as typists and filing clerks whilst simultaneously creating demand for information technology specialists, knowledge workers, and professionals who could leverage digital tools to enhance their productivity (Brynjolfsson & McAfee, 2014).
However, the current AI transformation represents a qualitative departure from previous technological waves. Recent empirical research demonstrates that large language models and other generative AI systems can perform tasks previously thought to require uniquely human capabilities, including creative writing and complex analysis (Eloundou et al., 2024).
A landmark study published in Science found that AI exposure is now concentrated among higher-wage, higher-skill occupations, inverting the traditional pattern where technology primarily affected lower-skill work (Eloundou et al., 2024). This represents a fundamental shift in the nature of technological displacement, suggesting that even elite professions such as law, medicine, and finance may face significant AI-driven transformation.
Important limits remain, however. While some systems may simulate cognitive empathy - recognising emotional cues and responding accordingly - they lack the biologically grounded emotional empathy that underpins genuine emotional intelligence (Gill, 2024).
Research by Acemoglu and Johnson (2024) provides a crucial theoretical framework for understanding this transition by drawing parallels between contemporary AI development and the introduction of machinery during the early Industrial Revolution. Their analysis suggests that the key determinant of technology's impact on workers is not the capability of the technology itself, but rather the social and economic institutions that govern its implementation and the distribution of its benefits.
Contemporary Workplace Resistance to AI Implementation
Recent global surveys confirm that workforce-related challenges remain a material barrier to AI scale-up. Deloitte’s 2024 State of Generative AI in the Enterprise study reports that while 79% of business leaders expect AI to transform their organisations within three years, only 25% feel highly prepared to manage governance and human-change risks, and just 47% believe their employees are adequately educated on the technology’s benefits (Deloitte AI Institute, 2024).
A 2025 enterprise survey by AI vendor Writer and research firm Workplace Intelligence - covering 1,600 U.S. knowledge workers - found that 31% of employees admit they are “sabotaging their company’s generative-AI strategy”; the share rises to 41% among Millennials and Gen Z (Writer & Workplace Intelligence, 2025).
Mechanisms and Manifestations of Resistance
The psychological mechanisms underlying this resistance are complex and multifaceted. Foundational research on technology adoption reveals that employee resistance often stems from perceived usefulness, social influence, and organisational support (Venkatesh et al., 2003). These concerns become particularly acute with AI implementation, where emerging studies highlight threats to job security, professional identity, and workplace autonomy arising from the potential for core job functions to be replaced entirely (Writer & Workplace Intelligence, 2025; Gill, 2024).
Beyond technology-acceptance factors, three psychological mechanisms clarify why AI evokes unusually strong resistance. First, the Job Demands–Resources (JD-R) model predicts strain when new techno-demands (e.g., constant prompt engineering) outpace the resources offered (training, autonomy), provoking withdrawal or counter-productive behaviour (Bakker & Demerouti, 2017). Second, Self-Determination Theory holds that workers comply more readily when AI implementations satisfy their needs for autonomy, competence and relatedness; when these needs are thwarted, defensive resistance rises (Deci & Ryan, 2000). Finally, loss-aversion framing (Kahneman & Tversky, 1979) means employees overweight potential job loss relative to promised efficiency gains, amplifying the perceived threat of AI. Integrating these lenses moves the conversation from surface attitudes to the motivational bedrock of resistance.
Initial resistance was predominantly driven by fears of direct job displacement, with workers expressing concerns about "training their own replacement" (Lund et al., 2021). However, recent organisational behaviour research indicates that resistance motivations have evolved beyond simple job security concerns. Contemporary resistance increasingly stems from practical disillusionment with AI tool performance, lack of transparent communication about implementation plans, and broader workplace relationship deterioration following significant organisational changes (Kotter & Schlesinger, 2008).
A systematic review of workplace technology adoption studies reveals that successful AI implementation requires addressing fundamental psychological safety concerns and ensuring participatory change management processes (Soulami et al., 2024). Organisations that fail to address these human factors consistently experience higher resistance rates and lower implementation success.
Organisational and Cultural Factors
The context of AI implementation significantly influences employee acceptance or resistance. Research demonstrates that organisations characterised by high psychological safety, transparent communication, and participatory implementation processes experience substantially lower resistance rates compared to those imposing AI adoption through top-down mandates (Edmondson, 1999).
Companies that involve employees in AI tool selection, provide comprehensive training programmes, and clearly articulate how AI will augment rather than replace human work achieve better adoption outcomes. These findings align with the technology-acceptance and motivational mechanisms outlined above: adoption improves when implementations are perceived as useful and well supported (Venkatesh et al., 2003), when adequate training and resources prevent strain (Bakker & Demerouti, 2017), and when workers' needs for autonomy, competence, and relatedness are satisfied (Deci & Ryan, 2000).
Labour union responses provide additional insight into organised resistance strategies. The 2023 Hollywood writers' and actors' strikes represented the first major collective bargaining actions focused primarily on AI issues, establishing important precedents for worker protections in AI implementation (Writers Guild of America, 2023). The Writers Guild successfully negotiated contract language guaranteeing that AI cannot replace writers, requiring human authorship credit regardless of AI assistance, and establishing compensation protections even when AI tools are used in content development processes.
These collective action successes demonstrate that worker resistance can achieve tangible protections and influence the trajectory of AI implementation. The Hollywood strikes served as a template for other industries, showing that organised labour can successfully negotiate boundaries around AI use and ensure that technological benefits are shared rather than concentrated solely among employers and technology vendors.
AI and Organisational Hierarchy Transformation
One of the most profound implications of widespread AI adoption concerns its potential to fundamentally restructure traditional organisational hierarchies and career advancement pathways. For decades, most industries have operated on pyramid-shaped organisational structures with clearly defined advancement tracks from entry-level positions through multiple management layers to executive leadership. This hierarchical model has provided the primary mechanism for career progression, salary increases, and professional identity development for millions of workers.
Mechanisms of Hierarchical Flattening
Contemporary research documents several mechanisms through which AI contributes to organisational flattening. AI systems can analyse data, generate reports, and make certain operational decisions faster and more consistently than traditional management review processes, potentially reducing the need for multiple supervisory layers (McAfee & Brynjolfsson, 2017). For example, AI-driven analytics platforms can provide real-time performance monitoring, resource allocation recommendations, and strategic insights that previously required extensive managerial analysis and approval chains.
Research on organisational design suggests that AI enables more distributed decision-making processes, where front-line employees equipped with AI tools can handle complex judgements that previously required escalation through management hierarchies (Hamel & Zanini, 2020). This capability suggests that organisational value creation may increasingly flow from human-AI collaboration at operational levels rather than through traditional command-and-control structures.
The implications extend beyond individual role changes to affect entire organisational design principles. As AI systems become more sophisticated at handling routine supervisory tasks such as scheduling, performance monitoring, and resource allocation, traditional middle management roles face particular pressure to evolve or risk obsolescence (Colvin, 2015).
Career Advancement in Flattened Organisations
The traditional model of career advancement through hierarchical promotion faces fundamental challenges in AI-augmented organisations. Research suggests that career progression will increasingly emphasise lateral skill development, specialised expertise acquisition, and impact-based recognition rather than supervisory responsibility accumulation (Gratton, 2011). Some companies have implemented dual career tracks, allowing employees to advance through either traditional management hierarchies or technical expertise pathways with equivalent compensation and recognition structures (CrewHR, 2025; Indeed, 2025; Career Insider, 2025). This transformation requires workers to reconceptualise professional success in terms of knowledge mastery, problem-solving capability, and adaptability rather than positional authority.
New categories of roles are emerging that cut across traditional departmental boundaries and hierarchical levels. Examples include AI prompt engineers, human-AI interaction specialists, AI ethics officers, and AI system trainers. These positions often command competitive salaries and high organisational influence despite not fitting traditional management structures. Research on emerging job categories indicates that compensation in these roles correlates more strongly with technical expertise and AI-related skills than with traditional markers such as team size or budget responsibility (Manyika et al., 2017).
AI's impact on employment and pay is uneven, varying markedly across skill levels, industries, and regions (Cazzaniga et al., 2024; Eloundou et al., 2024; PwC, 2025).
Organisations are also experimenting with alternative career frameworks that emphasise continuous learning, project-based advancement, and competency development. Such models, including the dual-track systems described above, attempt to preserve advancement opportunities whilst acknowledging that AI may reduce the need for traditional supervisory roles (Cazzaniga et al., 2024).
Rethinking Career Success and High-Paying Employment
The fundamental question of how individuals can achieve economic success and professional fulfilment in an AI-dominated workplace requires examining both emerging opportunities and evolving definitions of career achievement. Research demonstrates that AI adoption creates complex, non-uniform effects on employment and compensation across different skill levels, industries, and geographic regions.
AI-Enhanced Career Pathways
Empirical research consistently demonstrates that workers who develop AI-related competencies command significant wage premiums. Research analysing online job postings found that positions requiring AI skills offer an 11% wage premium within the same firm and a 5% premium within the same job title compared to similar roles without AI requirements (Alekseeva et al., 2021). More recent analysis by PwC found that AI-skilled workers commanded an average 9%-15% wage premium in 2024, representing a substantial increase from previous years (PwC, 2025). This premium exists even within the same companies and job categories, suggesting that AI literacy itself has become a valuable and scarce skill.
The nature of high-value work is evolving toward human-AI collaboration rather than pure human effort. Research indicates that the most successful professionals in AI-integrated environments are those who can effectively leverage AI tools to amplify their capabilities whilst focusing their human efforts on areas where they provide unique value, such as creative problem-solving, emotional intelligence, strategic thinking, and complex interpersonal relationships (Eloundou et al., 2024).
Entrepreneurship and independent work arrangements represent increasingly viable pathways to economic success in the AI era. AI tools dramatically lower barriers to starting businesses by enabling individuals to accomplish tasks that previously required entire teams. Brynjolfsson & McAfee (2014) highlight how digital tools have empowered solo operators and small firms by providing access to resources and capabilities once exclusive to larger organisations. A single entrepreneur with access to AI can now handle functions such as market research, content creation, customer service, financial analysis, and marketing that once required specialised staff. This democratisation of business capabilities may lead to increased self-employment and small business creation, though it also intensifies competition as more individuals gain access to similar AI-enhanced capabilities (McAfee & Brynjolfsson, 2017; Eloundou et al., 2024).
Paradoxical Value of Human-Centric Work
Interestingly, some of the work least susceptible to AI automation historically has been undervalued economically, despite requiring distinctly human capabilities. Occupations emphasising interpersonal connection, emotional support, creative expression, and physical presence - such as education, healthcare, counselling, and personal services - may experience increased demand and potentially higher compensation as AI handles more routine cognitive tasks (Frey & Osborne, 2017).
This trend suggests a potential revaluation of work that leverages uniquely human capabilities. As AI demonstrates increasing competence in analytical and creative tasks, the economic value of authentic human connection, empathy, and physical presence may rise correspondingly. Colvin (2015) emphasises that as machines become capable of performing routine and even complex tasks, the uniquely human traits of empathy, creativity, and interpersonal skill will become increasingly valuable in the workforce. However, this revaluation is not guaranteed and would likely require conscious social and policy choices rather than automatic market adjustments (Colvin, 2015).
Differential Impacts Across Skill Levels and Industries
AI's transformation of work will not affect all workers, industries, or regions uniformly. Recent empirical research provides an increasingly sophisticated understanding of which jobs face the greatest AI exposure and displacement risk, and which remain relatively protected by the nature of their tasks and requirements, with Eloundou et al. (2024) offering empirical breakdowns of exposure risk by occupation.
High-Risk Occupations and Tasks
Research using large language model capabilities to assess job exposure reveals that information-processing roles face the highest immediate AI impact. Eloundou et al. (2024) found that occupations involving data analysis, content creation, routine research, and standardised communication face substantial AI substitution potential. Specific roles identified as highly exposed include market research analysts, basic accounting functions, customer service representatives, paralegal work, and entry-level financial analysis.
Significantly, this AI wave reaches into professional occupations previously considered secure from automation. Eloundou et al. (2024) provide empirical evidence that AI systems can conduct legal research, draft contracts, and analyse case law with increasing sophistication. In medicine, AI supports diagnostic processes by analysing medical images and patient data to assist doctors in making accurate diagnoses. In education, AI tools create personalised learning experiences and generate educational content. In software development, AI assists with code generation and debugging, enhancing programmer productivity. These advancements potentially reduce demand for junior professionals in these fields while augmenting the capabilities of senior practitioners (Eloundou et al., 2024).
Protected and Emerging Occupations
Conversely, research consistently identifies certain categories of work that remain largely protected from AI displacement. Physical work requiring manual dexterity, spatial reasoning, and adaptability to unpredictable environments continues to require human capabilities that current AI and robotics cannot replicate. Frey & Osborne (2017) report that skilled trades such as electrical work, plumbing, construction, and equipment maintenance face very low automation risk due to the complexity of physical problem-solving in variable environments (Frey & Osborne, 2017).
Interpersonal service roles requiring emotional intelligence, cultural sensitivity, and complex human interaction also demonstrate resistance to AI replacement. Healthcare support positions, education, counselling, and hospitality work require human presence and emotional connection that AI cannot authentically provide. Analysis of occupation-level AI exposure suggests that jobs emphasising human connection and physical presence remain among the safest from AI displacement (Brynjolfsson et al., 2023).
High-level strategic and creative work may benefit from AI augmentation without facing displacement. Acemoglu & Restrepo (2022) argue that AI complements high-skill labour rather than replacing it, suggesting that senior executives, strategic planners, creative directors, and research scientists may find that AI enhances their capabilities. The uniquely human aspects of their work - vision, leadership, ethical judgement, and innovative thinking - become more valuable as AI takes over routine tasks, allowing these professionals to focus on higher-order functions that require human insight and creativity (Acemoglu & Restrepo, 2022).
Geographic and Economic Variations
The pace and pattern of AI adoption vary significantly across regions and economic contexts. Advanced economies with high labour costs and strong digital infrastructure may experience faster AI adoption rates, particularly in service and knowledge work sectors (Cazzaniga et al., 2024). The International Monetary Fund estimates that approximately 60% of jobs in advanced economies and about 40% in emerging market economies are exposed to AI, with lower shares in low-income countries where labour costs may slow automation adoption.
However, this geographic variation creates complex dynamics. Some emerging economies may experience "leapfrog" effects where AI adoption occurs rapidly without gradual transition periods, potentially creating more severe displacement effects. Additionally, the global nature of AI deployment means that work can be reorganised across geographic boundaries, with AI-augmented workers in lower-cost regions potentially competing with workers in higher-cost areas (Cazzaniga et al., 2024).
Economic Inequality and Distributional Consequences
Perhaps the most critical long-term implication of AI workplace transformation concerns its potential to exacerbate or alleviate economic inequality. Current research suggests that without conscious intervention, AI adoption may significantly worsen income and wealth distribution both within and between countries (Acemoglu & Restrepo, 2022).
Mechanisms of Inequality Amplification
First, the geographic unevenness of AI adoption amplifies disparities between economies. In some emerging economies, AI adoption may occur rapidly without gradual transition periods, and limited infrastructure and digital skills can hinder preparedness, increasing the likelihood of disruption (Cazzaniga et al., 2024). Moreover, the global nature of AI deployment enables task reallocation across borders, with AI-augmented workers in lower-cost regions increasingly competing with those in higher-cost economies, further complicating global workforce dynamics (Cazzaniga et al., 2024).
Second, access to AI tools and training creates potential 'AI divides' like historical digital divides. Organisations and individuals with resources to access cutting-edge AI capabilities gain significant competitive advantages, whilst those without such access may find themselves increasingly disadvantaged. This dynamic could amplify existing inequalities between large and small enterprises, well-funded and resource-constrained educational institutions, and wealthy and low-income individuals. Acemoglu & Restrepo (2022) provide evidence that automation has contributed to rising wage inequality in recent decades, reinforcing the idea that unchecked AI adoption could exacerbate economic disparities (Acemoglu & Restrepo, 2022; OECD, 2023).
Research examining technology adoption patterns suggests that the benefits of AI implementation may concentrate among those already economically advantaged, whilst the costs - such as job displacement and skills obsolescence - may disproportionately affect vulnerable populations (OECD, 2023). This pattern could exacerbate existing socioeconomic disparities unless deliberate policy interventions ensure more equitable distribution of AI's benefits.
International and Development Implications
The global implications of AI inequality may be particularly severe. The International Monetary Fund warns that AI could worsen inequality both within and between countries, as developed economies with advanced AI capabilities gain competitive advantages over those with limited AI access (Cazzaniga et al., 2024). This dynamic could reverse some of the globalisation trends that enabled emerging economies to compete through lower labour costs, as AI-augmented productivity in advanced economies reduces the importance of labour cost differentials.
Conversely, some research suggests potential for AI to reduce certain forms of inequality by democratising access to expertise and capabilities previously available only to well-resourced organisations and individuals. OECD (2023) acknowledges that AI has the potential to widen access to services if properly managed. For instance, AI tutoring systems could provide high-quality education to underserved populations, AI-powered diagnostic tools could extend medical expertise to remote areas, and AI business tools could enable small entrepreneurs to compete more effectively with larger enterprises (OECD, 2023).
Policy and Intervention Opportunities
The distributional outcomes of AI adoption are not predetermined but depend heavily on policy choices and institutional responses. Research suggests several potential intervention strategies to ensure broader benefit distribution. OECD (2023) emphasises the importance of public investment in AI literacy and retraining programmes to manage AI impacts equitably. The report also highlights the need to address concentration risks in AI capabilities, although it does not explicitly call for antitrust enforcement as a specific policy measure (OECD, 2023).
The success of such interventions likely depends on their implementation before AI adoption becomes fully entrenched. Acemoglu & Johnson (2024) emphasise that the institutional frameworks established during periods of rapid technological change tend to persist and shape long-term distributional outcomes, reinforcing the urgency of developing inclusive AI policies before current trends become irreversible.
Implications for Vulnerable Populations
The potential impacts of AI transformation on economically vulnerable populations - including those with limited education, few resources, and precarious employment - represent perhaps the most pressing social challenge of the AI era. Autor (2015) highlights that lower-educated workers are less able to adapt to automation, making them particularly vulnerable to technological displacement. Without proactive intervention, these groups risk bearing disproportionate costs of technological transition whilst receiving minimal benefits (Autor, 2015).
Educational and Skills Gaps
Research consistently demonstrates that educational attainment correlates strongly with ability to adapt to AI-driven workplace changes. Workers with college education and technical training are more likely to transition successfully to AI-complementary roles, whilst those with high school education or less face greater displacement risk without clear pathways to comparable alternative employment (Autor, 2015). Analysis of labour market transitions during previous technological shifts suggests that educational disparities in adaptation outcomes may be particularly pronounced with AI due to its broad applicability across skill levels.
This educational divide interacts with other forms of disadvantage to create compound vulnerabilities. Workers in routine occupations - which tend to employ higher percentages of people from racial minorities, women returning to the workforce, and those without college degrees - face both direct displacement from AI and barriers to accessing retraining programmes that could facilitate transitions to new roles. Lund et al. (2021) show that routine roles, which disproportionately employ women and minorities, were significantly disrupted during the COVID-19 pandemic, highlighting the increased vulnerability of these groups to technological displacement (Lund et al., 2021).
Access and Training Barriers
Even when retraining opportunities exist, vulnerable populations often face multiple barriers to participation: financial constraints that prevent unpaid training participation, caregiving responsibilities that limit time availability, geographic isolation from training centres, and educational prerequisites that exclude those who most need assistance. Card et al. (2018) show that dismantling these structural barriers is critical to the success of training programmes: addressing financial constraints through stipends or paid training, offering flexible scheduling, delivering training remotely or locally, and reducing educational prerequisites can significantly improve participation rates and outcomes for vulnerable populations.
Promising examples include programmes that provide income support during training, offer childcare assistance, deliver training in multiple languages and formats, and actively recruit participants from affected communities rather than waiting for voluntary enrolment. International evidence on workforce development programmes demonstrates that well-designed interventions can achieve significant earnings increases for participants, but only when they address comprehensive barriers to participation (Card et al., 2018).
Social Safety Net Implications
The scale and speed of potential AI-driven displacement may overwhelm existing social safety net systems designed for temporary unemployment rather than structural economic transformation. Frey (2019) argues that addressing AI's impact on vulnerable populations may require a fundamental redesign of social support systems, potentially including universal basic income, job guarantees in sectors resistant to automation, or significant expansion of existing unemployment and retraining benefits.
The political feasibility of such expansive social programmes remains uncertain, particularly in countries with limited social welfare traditions. However, the alternative of widespread economic displacement without adequate support systems could create significant social instability and undermine the political sustainability of AI adoption itself. Frey (2019) examines how inadequate social policy during earlier technological transitions fuelled instability and social unrest, reinforcing the need for robust welfare responses to AI and automation.
Conclusion: Toward Inclusive AI Implementation
The transformation of work through artificial intelligence represents both an unprecedented opportunity and a fundamental challenge for contemporary society. Our analysis reveals that current trajectories toward AI implementation risk creating significant social divisions between those who benefit from AI augmentation and those who face displacement without adequate support or alternative pathways to economic security.
The resistance we observe in contemporary workplaces reflects legitimate concerns about the distribution of AI's benefits and costs. Workers are responding rationally to implementation strategies that prioritise efficiency gains over human welfare, just as the Luddites responded to mechanisation that enriched factory owners whilst impoverishing skilled craftspeople.
Acemoglu & Johnson (2024) emphasise that the social consequences of technological change depend not on the technology itself but on the institutional frameworks and power structures that govern its implementation. They argue that the frameworks established early in a technological transition have lasting effects on distributional outcomes, reinforcing the importance of developing inclusive AI policies before current trends become irreversible.
Research evidence suggests several principles for more inclusive AI adoption. First, transparency and participation in implementation decisions significantly improve worker acceptance and outcomes. Soulami et al. (2024) found that organisations that involve employees in AI tool selection, provide comprehensive training, and clearly articulate augmentation rather than replacement strategies achieve better results for both productivity and worker welfare, fostering a supportive work environment and enhancing employee well-being.
Second, the benefits of AI-driven productivity gains must be broadly shared rather than concentrated among technology owners and highly skilled workers. The OECD (2023) recommends policies such as progressive taxation, strengthened social safety nets, substantial public investment in education and retraining, and antitrust enforcement to prevent excessive market concentration.
Third, society must recognise and value the irreplaceably human aspects of work - creativity, empathy, ethical judgement, and interpersonal connection - rather than treating all labour as subject to technological optimisation. This may require a conscious cultural and economic revaluation of work that emphasises human dignity and social contribution over pure efficiency metrics (OECD, 2023; Colvin, 2015).
The choices made during this transitional period will largely determine whether AI becomes a tool for broadly shared prosperity or a driver of increased inequality and social division. The resistance we observe today represents an opportunity for course correction toward more inclusive implementation strategies. By learning from both historical technological transitions and contemporary research on AI workplace impacts, we can work toward an AI-augmented future that enhances rather than threatens human flourishing across all segments of society.
The question is not whether AI will transform work - that transformation is already underway. The question is whether we will guide that transformation toward outcomes that serve broad social welfare or allow it to proceed according to narrow efficiency criteria that may undermine the social foundations upon which technological progress ultimately depends.
References
Acemoglu, D., & Johnson, S. (2024). Learning from Ricardo and Thompson: Machinery and labor in the early Industrial Revolution and in the age of artificial intelligence. Annual Review of Economics, 16, 597-621.
Acemoglu, D., & Restrepo, P. (2022). Tasks, automation, and the rise in U.S. wage inequality. Econometrica, 90(5), 1973-2016.
Alekseeva, L., Azar, J., Gine, M., Samila, S., & Taska, B. (2021). The demand for AI skills in the labor market. Labour Economics, 71, 102002.
Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3-30.
Autor, D. H., Levy, F., & Murnane, R. J. (2003). The skill content of recent technological change: An empirical exploration. The Quarterly Journal of Economics, 118(4), 1279-1333.
Autor, D. H. (2018). Work of the past, work of the future (NBER Working Paper No. 24871). National Bureau of Economic Research.
Bakker, A. B., & Demerouti, E. (2017). Job demands–resources theory: Taking stock and looking forward. Journal of Occupational Health Psychology, 22(3), 273-285.
Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W. W. Norton & Company.
Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). Generative AI at work (NBER Working Paper No. 31161). National Bureau of Economic Research.
Card, D., Kluve, J., & Weber, A. (2018). What works? A meta-analysis of recent active labor market program evaluations. Journal of the European Economic Association, 16(3), 894-931.
Career Insider. (2025). Implementing dual career ladders for organizational success.
Cazzaniga, M., Jaumotte, F., Li, L., Melina, G., Panton, A. J., Pizzinelli, C., Rockall, E. J., & Tavares, M. M. (2024). Gen-AI: Artificial intelligence and the future of work. International Monetary Fund Staff Discussion Note SDN/2024/001.
Colvin, G. (2015). Humans are underrated: What high achievers know that brilliant machines never will. Portfolio.
CrewHR. (2025). Dual career ladder/track.
Deci, E. L., & Ryan, R. M. (2000). The "what" and "why" of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227-268.
Deloitte AI Institute. (2024, January 15). The state of generative AI in the enterprise: Now decides next (Q1 findings). Deloitte Development LLC.
Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350-383.
Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2024). GPTs are GPTs: An early look at the labor market impact potential of large language models. Science, 384(6702), 1306.
Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254-280.
Frey, C. B. (2019). The technology trap: Capital, labor, and power in the age of automation. Princeton University Press.
Gratton, L. (2011). The shift: The future of work is already here. HarperCollins.
Hamel, G., & Zanini, M. (2020). Humanocracy: Creating organizations as amazing as the people inside them. Harvard Business Review Press.
Indeed. (2025). What is a dual career path?
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263-292.
Kotter, J. P., & Schlesinger, L. A. (2008). Choosing strategies for change. Harvard Business Review, 86(7/8), 130-139.
Lund, S., Madgavkar, A., Manyika, J., Smit, S., Ellingrud, K., Meaney, M., & Robinson, O. (2021). The future of work after COVID-19. McKinsey Global Institute.
Manyika, J., Lund, S., Chui, M., Bughin, J., Woetzel, J., Batra, P., Ko, R., & Sanghvi, S. (2017). Jobs lost, jobs gained: Workforce transitions in a time of automation. McKinsey Global Institute.
McAfee, A., & Brynjolfsson, E. (2017). Machine, platform, crowd: Harnessing our digital future. W. W. Norton & Company.
Mokyr, J. (2016). A culture of growth: The origins of the modern economy. Princeton University Press.
OECD. (2023). OECD employment outlook 2023: Artificial intelligence and the labour market. OECD Publishing.
PwC. (2025, June 3). 2025 Global AI Jobs Barometer. PwC.
Sale, K. (1995). Rebels against the future: The Luddites and their war on the Industrial Revolution. Perseus Publishing.
Soulami, M., Benchekroun, S., & Galiulina, A. (2024). Exploring how AI adoption in the workplace affects employees: A bibliometric and systematic review. Frontiers in Artificial Intelligence, 7, 1473872.
Thompson, E. P. (1963). The making of the English working class. Victor Gollancz.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478.
Writers Guild of America. (2023). Summary of the 2023 WGA MBA.
Writer & Workplace Intelligence. (2025, March 18). Generative AI adoption in the enterprise: 2025 survey key findings. Writer.com.