When AI Joins the Project Team: What Organisations Need to Get Right

Generative AI is quickly becoming part of everyday project work. Tools such as ChatGPT, Gemini and Microsoft Copilot are now being used to draft reports, support planning, summarise information, analyse risks and improve communication. For many organisations, the attraction is clear: faster processes, better access to information and more support for decision-making.

But a new study by CBISS member Dr Geshwaree Huzooree reminds us that adopting Generative AI is not simply a matter of introducing a new tool. It changes how people work, how decisions are made, how teams collaborate and how responsibility is shared.

The paper reviews existing research on Generative AI in project management and argues that successful adoption depends on alignment between technology, people, processes, culture and governance. In other words, AI works best when organisations treat it as part of a wider system, rather than as a quick technical fix.

One of the key insights from the study is that Generative AI does not remove the need for human judgement. Instead, it changes where human judgement is needed. Project professionals may spend less time producing routine documents from scratch, but more time checking, interpreting and improving AI-generated outputs. This means skills such as critical thinking, ethical awareness, communication and accountability become even more important.

The study also highlights that AI adoption happens at different levels. At the organisational level, leaders need clear policies, data protection measures, ethical guidance and a realistic investment plan. At the project level, AI needs to fit with existing planning, reporting, risk management and stakeholder engagement processes. At the team level, staff need training and confidence to use AI responsibly, without becoming over-reliant on it.

This is where many organisations may face difficulty. It is easy to focus on the technology itself, but harder to redesign work around it. If AI is introduced without clear oversight, it can create confusion over accountability, increase the risk of errors, or weaken professional learning. For example, an AI-generated risk report may look convincing, but still require careful checking by someone who understands the project context.

The message for practice is therefore clear: organisations should not ask only, “Which AI tool should we use?” They should also ask, “How will this change the way we govern, manage and learn from our projects?”

For project managers, the research offers a useful reminder that AI should be seen as a collaborative support system, not a replacement for professional expertise. Used well, it can reduce administrative burden, improve access to knowledge and support more informed decision-making. Used poorly, it can create new risks around trust, quality and responsibility.

Although the paper focuses on project management, its lessons apply more widely. Any organisation adopting AI needs to think carefully about people, culture, skills and governance. The real challenge is not simply whether AI can produce faster outputs, but whether organisations can use those outputs wisely.

At CBISS, this research speaks directly to our interest in responsible innovation and future workforce transformation. It shows that the future of AI-enabled work will depend not only on smarter technologies, but also on smarter organisational design.