PM Guide

AI

Artificial intelligence has gone from a specialist technology to a daily working tool faster than almost any previous technology in business history. Within two years of OpenAI releasing ChatGPT in November 2022, generative AI had been integrated into project management software, productivity suites, communication platforms, and recruitment processes that millions of people use every day. For project managers in Irish SMEs and charities, the question is no longer whether AI is relevant to their work (it already is) but how to use it well, what to be cautious about, and what obligations it creates.

Why it matters

The scale of AI's potential impact is not modest. PwC's Sizing the Prize report estimated that AI could contribute up to $15.7 trillion to the global economy by 2030 (PwC, 2017). Microsoft and Trinity College Dublin's joint report published in March 2025 projected that AI adoption could add at least €250 billion to Ireland's economy by 2035, with supportive policies potentially adding a further €60 billion (Microsoft Ireland and Trinity College Dublin, 2025). These projections should be treated with the appropriate scepticism that attaches to long-range economic modelling, but the direction of travel is clear: AI is a macro-level force, not a niche tool.

At the individual and team level, PMI's 2024 research on generative AI in project management found that "trailblazers" (project professionals who are high adopters of GenAI tools) reported being 93% more productive than low adopters, and showed higher performance on problem-solving, creativity, and documentation tasks (PMI, 2024). The same research noted that the number of project managers using GenAI for 50% or more of their projects increased by 86% within a three-month tracking period. These are significant gains, though they should be read alongside a counterbalancing finding: only around 20% of project managers report having extensive or good practical AI knowledge (PMI, 2023).

Core concepts

Artificial intelligence, in its current practical form, refers to systems that use machine learning, and particularly deep learning, to analyse data, identify patterns, and generate outputs without being explicitly programmed for every task (OECD, 2019). The term covers a wide spectrum, from narrow systems that do one thing well (a demand forecasting algorithm, a spam filter) to the general-purpose large language models (LLMs) such as GPT, Claude, and Gemini that can generate coherent text, code, analysis, and creative content across almost any domain.

Generative AI (GenAI) is the subset that has most directly entered project management workflows. Its most common project applications include drafting and summarising documents (meeting notes, status reports, stakeholder updates), generating initial risk register entries or requirements lists, analysing project data for scheduling and resource insights, and producing first drafts of communications. These applications reduce the time cost of routine cognitive labour: the part of a project manager's week that is important but not high-value.

What GenAI cannot do is equally important to understand. It cannot exercise genuine judgement about what matters to stakeholders it has not met, navigate the political dynamics of a specific team, assess whether a plan is right in the context of a particular organisation's culture, or take accountability for a recommendation it has made. PMI researchers frame this with a useful two-axis model: tasks that are low in complexity and require minimal human judgement can be delegated substantially to AI; tasks that are high in complexity or ethical sensitivity require significant human involvement even when AI assists with information processing (PMI, 2023). Knowing where any given task sits on that spectrum is a core skill for modern project managers.
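PMI's two-axis model can be expressed as a simple decision rule. The sketch below is purely illustrative: the function name, the 1-to-5 scoring scale, and the thresholds are hypothetical choices for demonstration, not part of PMI's published framework.

```python
def delegation_level(complexity: int, judgement_needed: int) -> str:
    """Suggest how much of a task to hand to AI, following the spirit of
    PMI's two-axis model (task complexity vs. human judgement required).

    complexity: 1 (routine) to 5 (highly complex)
    judgement_needed: 1 (mechanical) to 5 (ethically or politically sensitive)
    """
    # Low complexity, minimal judgement: safe to delegate substantially.
    if complexity <= 2 and judgement_needed <= 2:
        return "delegate substantially: AI drafts, human spot-checks"
    # High complexity or high sensitivity: human-led regardless of AI support.
    if complexity >= 4 or judgement_needed >= 4:
        return "human-led: AI assists with information processing only"
    # Everything in between: shared work with mandatory human review.
    return "shared: AI drafts, human reviews and decides"


# Hypothetical task scores for illustration:
print(delegation_level(1, 1))  # e.g. summarising meeting notes
print(delegation_level(3, 5))  # e.g. a politically sensitive stakeholder decision
```

The point of the exercise is not the scoring itself but the habit it encodes: placing each task on the complexity/judgement spectrum before deciding how much of it to delegate.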

Automation bias, the documented tendency of people to over-rely on algorithmic outputs and discount contradictory human judgment, is one of the most important risks of AI adoption in project environments (Cummings, 2017). When AI generates a risk assessment or a schedule, there is a human cognitive pull toward accepting it because it looks authoritative and complete. The antidote is discipline: AI outputs must be treated as drafts and starting points, not finished products.

The Irish and EU context

Ireland occupies a structurally interesting position in the global AI landscape. The country hosts the European headquarters of most major AI technology companies, Google, Microsoft, Meta, and Amazon among them, as well as a growing ecosystem of AI-native startups. The CSO's 2025 Business in Ireland statistics found that 20.2% of Irish enterprises used AI technologies in 2025, more than doubling from 8.1% in 2023, with Ireland on par with the EU average of 20% but significantly behind EU leaders Denmark (42%) and Finland (38%) (CSO, 2025). Adoption is sharply uneven: large enterprises (57.7%) are far ahead of small ones (17.2%).

For small organisations, the practical gap is real. A small charity does not have an AI implementation team; it has a staff of five and a board meeting once a month. Enterprise Ireland's "Digital Discovery" grant provides up to €5,000 for SMEs to work with knowledge providers on digital and AI feasibility, a tangible starting point that many organisations have not used (Enterprise Ireland, 2024). Ireland's National AI Strategy, originally published in 2021 and refreshed in 2024, explicitly committed to a national campaign to raise awareness among SMEs of AI adoption supports (Department of Enterprise, Tourism and Employment, 2024). Ireland appointed its first-ever Minister for Artificial Intelligence in January 2025, a signal of the government's seriousness about the policy agenda.

The most immediately material development for any Irish organisation using AI is the EU AI Act (Regulation (EU) 2024/1689), which entered into force on 1 August 2024. The Act takes a risk-based approach, categorising AI systems as unacceptable risk (prohibited outright), high risk (requiring strict compliance obligations), limited risk (transparency requirements), or minimal risk (no obligation, though codes of conduct are available) (European Commission, 2024). From 2 February 2025, AI practices deemed to pose unacceptable risk, including social scoring systems, real-time biometric surveillance in public spaces, and certain forms of subliminal manipulation, are prohibited. The bulk of obligations for high-risk systems, including AI used in employment and workers' management decisions, will apply from 2 August 2026. For an Irish SME or charity using an AI recruitment or screening tool, or an AI-assisted performance management system, high-risk compliance is not a future consideration. It is an active planning task.

Common pitfalls

Three failure patterns are emerging consistently across sectors.

The first is uncritical delegation: using AI to draft outputs and submitting them without substantive human review. The product of this is project documentation and stakeholder communications that are grammatically fluent and factually thin. AI hallucinates: it confidently produces plausible-sounding but incorrect information, and a project manager who treats AI output as checked output will eventually discover the error in front of a funder or a board.

The second is a governance vacuum: adopting AI tools without clear organisational policies about what data can be entered into which systems, who reviews AI-generated outputs before they are used, and how AI use is disclosed to clients and beneficiaries. Employees typically begin using AI tools spontaneously; Microsoft's research found that 80% of Irish organisations reported staff using free AI tools without enterprise security controls in 2025 (Microsoft Ireland and Trinity College Dublin, 2025).

The third is treating AI as a junior staff member rather than a sophisticated drafting and analysis tool. AI does not have context, stakes, or judgment. Giving it responsibility rather than tasks is the category error that most often produces poor outcomes.

Watch / Listen / Read

Watch: How AI Can Save Our Humanity by Kai-Fu Lee (TED2018, 15 min). The former president of Google China and AI researcher argues that AI will eliminate routine work but cannot replicate human empathy and creativity, and that this is, ultimately, an opportunity to refocus on what is distinctively human about our work. Available at https://www.ted.com/talks/kai_fu_lee_how_ai_can_save_our_humanity.

Listen: Projectified® by PMI features multiple episodes on AI and generative AI in project management. Available at https://www.pmi.org/projectified-podcast; search "AI" or "generative AI" for episodes covering practical adoption, risks, and team dynamics.

Read: PMI (2024) Pushing the Limits: Transforming Project Management with GenAI Innovation. Newtown Square, PA: PMI. Available at link. The most direct evidence base for understanding where AI is already delivering productivity gains in project management, and where human judgment remains non-negotiable.

Quick quiz

  1. What does the EU AI Act's risk-based approach identify as its four risk categories for AI systems?
  2. According to CSO 2025 data, what percentage of Irish enterprises were using AI technologies, and what was the equivalent figure in 2023?
  3. PMI's 2024 research found that "trailblazers" (high GenAI adopters) were how much more productive than low adopters?
  4. What is "automation bias" and why is it particularly relevant to AI use in project management?
  5. From what date were prohibited AI practices (unacceptable risk) under the EU AI Act first applicable?

Answers: (1) Unacceptable risk (prohibited), high risk, limited risk, and minimal risk. (2) 20.2% in 2025, up from 8.1% in 2023. (3) 93% more productive than low adopters. (4) The tendency to over-rely on algorithmic outputs and discount contradictory human judgment, relevant because AI-generated project documents look authoritative and complete, making uncritical acceptance cognitively easy. (5) 2 February 2025.

References

Central Statistics Office (2025) Business in Ireland 2025: Digital Sustainability and Artificial Intelligence Adoption. Cork: CSO. Available at: link (Accessed: 27 April 2026).

Cummings, M.L. (2017) 'Automation Bias in Intelligent Time Critical Decision Support Systems', in AIAA 1st Intelligent Systems Technical Conference. Reston, VA: AIAA.

Department of Enterprise, Tourism and Employment (2024) National AI Strategy Refresh 2024: AI, Here for Good. Dublin: Government of Ireland. Available at: https://enterprise.gov.ie/en/publications/national-ai-strategy-refresh-2024.html (Accessed: 27 April 2026).

Enterprise Ireland (2024) Digital Discovery grant. Available at: https://www.enterprise-ireland.com/en/innovation/access-advice/ (Accessed: 27 April 2026).

European Commission (2024) Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (AI Act). Official Journal of the European Union, 12 July. Available at: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai (Accessed: 27 April 2026).

Microsoft Ireland and Trinity College Dublin (2025) The AI Economy in Ireland 2025. Dublin: Microsoft Ireland. Available at: link (Accessed: 27 April 2026).

OECD (2019) Recommendation of the Council on Artificial Intelligence. Paris: OECD. Available at: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 (Accessed: 27 April 2026).

Project Management Institute (2023) Shaping the Future of Project Management With AI. Newtown Square, PA: PMI. Available at: link (Accessed: 27 April 2026).

Project Management Institute (2024) Pushing the Limits: Transforming Project Management with GenAI Innovation. Newtown Square, PA: PMI. Available at: link (Accessed: 27 April 2026).

PwC (2017) Sizing the Prize: What's the real value of AI for your business and how can you capitalise? London: PricewaterhouseCoopers. Available at: link (Accessed: 27 April 2026).