There’s a difference between driving a car, knowing how it works, and managing a whole fleet. In the same way, people may use technology proficiently without understanding how it works or making thoughtful choices. A digitally skilled worker also needs the ability to step back, reflect, and make mindful choices; otherwise they are just a ‘worker,’ swiping like a TikTok user.

This matters even more for leaders, who make technology choices not just for themselves but for others and have a disproportionate influence in the workplace. I recently contributed to a new report from CEMS – Augmented Leadership – highlighting the need to develop mindful leaders who understand both how technology works and how to use it responsibly. With the dawn of GenAI, this is crucial: it functions differently from traditional digital tools, so leaders must go beyond prompting a chatbot to grasp what’s happening under the hood.

There are many misconceptions about AI that must be addressed through teaching and training. Here are five key truths that leaders and future leaders need to understand:

Truth 1: AI isn’t a colleague

People think AI learns continuously from conversation. It doesn’t. Once trained, it’s static; to improve, it must be retrained. Unlike a human coworker, you can’t performance-manage a chatbot that is underperforming!

AI is probabilistic and behaves in ways we’re not used to from computers. Chatbots hallucinate; they are not accurate by default. They generate predictions: often useful, sometimes wrong, but always convincing. AI doesn’t “understand” like a human does. It can make bizarre mistakes no human colleague would, yet also digest a complex 100-page report in seconds.
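What “probabilistic” means here can be made concrete with a toy sketch: a language model assigns probabilities to candidate next words and samples from them, so plausible-but-wrong continuations are always possible. The distribution below is invented purely for illustration; it is not from any real model.

```python
import random

# Hypothetical next-word probabilities a model might assign after
# "The meeting is scheduled for ..." (numbers invented for illustration)
next_token_probs = {
    "Monday": 0.5,
    "Tuesday": 0.3,
    "Friday": 0.15,
    "yesterday": 0.05,  # implausible, but still has non-zero probability
}

def sample_next_token(probs, rng):
    """Pick one word at random, weighted by its assigned probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the run is repeatable
samples = [sample_next_token(next_token_probs, rng) for _ in range(10)]
print(samples)
```

Run it a few times with different seeds and the answers vary, and occasionally the model confidently schedules the meeting for “yesterday.” That, in miniature, is why a fluent output is not the same as a correct one.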

Leaders must step back and remember this is a tool, not a trustworthy colleague. The challenge is to balance those strengths and weaknesses to benefit the organisation.

At the University of Sydney we’ve provided much-needed upskilling to more than 3,000 leaders through our AI Fluency Sprint – an online short course for executives. Together with the Australian Institute of Company Directors, we’ve also trained 500 board directors with a focus on risk and governance.

Truth 2: The more critical your thinking, the more effective you will be with AI

The overriding fear is that AI may erode creativity, but in reality, creativity, experience and critical thinking make you more effective with it.

Young people may adopt AI quickly, but experienced workers often use it better. Prompting AI is much like delegating: you must frame a task with context. Without it, outputs are shallow. Judging quality also depends on experience. Senior workers know what “good” looks like and refine prompts accordingly, while juniors tend to accept outputs at face value.

This is why senior employees often get more value from AI, while juniors need coaching. The idea that digital natives will “just figure it out” doesn’t hold here: the real skills needed are task and judgment skills, not AI-specific tricks. They are skills in how the world works.

Truth 3: AI is coming for your job description, not your job

AI won’t replace most workers one-to-one, but it will reshape jobs. We’ll do less writing and more reviewing, curating, and iterating. These tools let us test ideas quickly, but judgment and discernment will be increasingly essential to decide what’s best.

Some routine tasks may simply disappear, but the question for me is, if AI can do them, should they exist in the first place? In that sense, AI might help us rethink not just how we work, but the structure of work itself.

Truth 4: The basic principles of AI haven’t changed for 40 years

Technology moves so fast – so how is it possible to unpack how these tools work within training? In fact, AI’s basic principles haven’t changed much in the last 40 years. The idea of machines learning to predict patterns goes back to the 1950s, with early practical systems in the 1980s. Since then, progress has been incremental: faster computing, better algorithms, stronger products. The foundations, however, remain the same.
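To see how old the core idea is, consider the perceptron from 1958: adjust weights whenever a prediction is wrong until the pattern is learned. The sketch below is a minimal illustration of that principle, here learning the logical AND pattern; today’s models apply the same error-driven weight adjustment at vastly greater scale.

```python
# A perceptron in the spirit of Rosenblatt (1958): nudge weights toward
# reducing prediction error on each example until the pattern is learned.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred                # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x[0]       # shift weights toward the answer
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Learn logical AND from four labelled examples
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
preds = [1 if w[0] * a + w[1] * c + b > 0 else 0 for a, c in X]
print(preds)  # → [0, 0, 0, 1]
```

A dozen lines from the 1950s playbook, and the machine has learned a pattern from data. The leap to today is one of scale and engineering, not of fundamental principle.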

Truth 5: Employers won’t hire graduates for prompting skills

Professionals must know when AI helps and when they must think for themselves. Simply outsourcing thought is “vibe working” – coasting on outputs. A graduate whose only skill is prompting a chatbot adds no value to a prospective employer, as the employer could simply ask the chatbot themselves.

That’s why organizations and universities alike must use AI to build expertise and enhance critical thinking. AI can function as a Socratic tutor, asking reflective questions while students and early career professionals do the thinking. Assessments and training should evaluate judgment and creativity, ensuring employees bring real value.

At the University of Sydney, we’ve launched a mandatory “two-lane” approach. In one lane, we assume students use AI and ask them to reflect on it – it would be naïve to think otherwise. However, in the second lane we have designed supervised oral, practical and written exams where AI simply cannot be used, so we can assess genuine skills.

Looking ahead to 2030

Five years is a long way ahead to predict. Who could have predicted our current reality back in 2020?

Whatever the future holds, I do hope that by 2030 we better understand that human and machine intelligence are fundamentally different kinds of intelligence.

The real opportunity will be to figure out how the two complement each other, rather than imagine that one will replace the other.


This article was originally published by CEOWORLD Magazine.