Does the prospect of implementing AI technologies at your small or medium-sized enterprise fill you with an uncomfortable mix of confusion and alarm?
Maybe you’ve heard or read something that left you fearful of AI and its capabilities. Or maybe you’re not fearful at all and eager to adopt AI but simply don’t have the time to find a solution that works, or the skills to use it effectively.
Whatever you may be feeling or experiencing about AI and its use, McKenzie Lloyd-Smith can sympathize and relate. Lloyd-Smith is the founder of MindPort, a Toronto-based AI consultancy with clients across the globe.
Born and raised in the United Kingdom, Lloyd-Smith obtained his PhD in Management at the University of London. He moved to Toronto for a postdoctoral fellowship, working on a team supporting AI implementation in healthcare.
The project had clear benefits for doctors and patients alike, reducing workloads while delivering treatments more quickly and with better outcomes. Even so, Lloyd-Smith recalls an “aversion” from both doctors and patients when it came to the new technology.
“There was a sense of ‘Is this going to take my job?’ or ‘Who’s to blame if the technology goes wrong?’” Lloyd-Smith recalls. “These are things we’re now very familiar with in terms of the conversation around AI. At that moment I realized this was a big area to spend time thinking about because it was only going to become increasingly common.”
While MindPort helps clients get the most out of their AI adoption, Lloyd-Smith insists he’s not an “AI optimist” who blindly advocates for indiscriminate use of these powerful new tools.
“If you think AI is the solution, then you’ll think everything can be done better with AI,” Lloyd-Smith says. “I’m a realist, and I come at all this from a human-centric perspective. At MindPort, we look to enhance the experience of work and life for people using AI, as opposed to replacing people and automating them out of jobs. We take the perspective of ‘What is the problem?’ and then ask ‘Is AI the right solution?’”
Another part of being an AI realist, Lloyd-Smith says, is encouraging business owners to allow AI use at work rather than banning it. Beyond the obvious risk of losing their competitive edge to rivals who do benefit from AI tools, he cites international research showing that workers will use AI on the job whether they’re authorized to or not.
“Having a policy of ‘Don’t use it!’ is detrimental,” Lloyd-Smith explains. “It’s much better to allow use but put in some guardrails to say ‘This is where it’s appropriate, this is where it’s not,’ and make it clear who people need to speak to if they’re not sure.”
The good news for SMEs who are curious about AI adoption, Lloyd-Smith says, is that “the barrier to entry is now almost nil.” Many tools are available for free, while others offer affordable monthly subscription models.
“The biggest two barriers are time, in terms of finding the right product, and just not understanding what AI is capable of doing,” Lloyd-Smith says. “That comes down to not experimenting and not having the requisite knowledge.”
To guide SMEs who need some help acquiring that knowledge, Lloyd-Smith offers his three “core pillars” of responsible AI exploration and adoption.
The first, strategic alignment, begins with asking yourself two important questions about your desire to implement AI. First, why are we doing this? And second, how does it tie into our broader strategy? By and large, your answers should be aligned with the organization’s existing goals. You’ll also need to allocate a budget for AI exploration and for the investment required to adopt any new technology.
The second pillar is skills and training. Lloyd-Smith says it starts with ensuring a baseline level of digital skills across your organization, supplemented by specific training and education opportunities.
“You need a willingness to experiment and a willingness to learn,” he says. “There should be someone who is an AI champion and is always trying to keep on top of things, because this stuff is moving quickly. Who’s the one who is excited about this and constantly reading the news about AI, what it can do and what’s the latest thing?”
The third and final pillar is policy and governance, which means clearly defining your organization’s user policies and acceptable use cases around AI.
As Lloyd-Smith points out, privacy issues mean it could be inappropriate to enter personal information, such as resumes, into the large language models that power generative AI tools such as ChatGPT. Further, such tools “often echo or even amplify societal and cognitive biases,” making them highly unsuitable for hiring decisions. In other instances, generative AI tools may “hallucinate,” spitting out content that sounds convincing but isn’t accurate.
“It’s about helping people understand where they should and shouldn’t use the technology, and what it can and can’t do,” Lloyd-Smith says of setting appropriate policies. “AI should be approached with caution in ethical grey areas or scenarios involving important decision-making. It’s not a definitive source of truth and should be treated as a tool, not an authority. When the level of importance of the work you’re doing increases, the reliance on the technology should, to some degree, decrease. Ultimately, it’s essential to establish clear policies that maintain human oversight and accountability.”
Visit the link to learn more about MindPort Inc.
Visit the MagnetAI website for more information and resources to support AI adoption at your business.
Like Lloyd-Smith, Magnet’s Executive Director Mark Patterson advocates for an approach to AI adoption that emphasizes learning new skills and improving the experience of professionals rather than replacing them. Read more of Mark’s thoughts here.