How to Implement AI in the Workplace: A Problem-First Framework
By Ridge Carpenter, AI Product Manager, Kelly
Key takeaways
Identify your biggest pain points first. Map business problems before evaluating any AI tools; avoid solution-first thinking that leads to wasted resources.
Distinguish adoption from effective usage. High usage rates don't guarantee ROI; strategic deployment in targeted use cases outperforms blanket implementation.
Apply the "AI helps, you decide" framework. Use AI for summarizing, searching, drafting, and classifying - but keep final decision-making with humans.
Measure impact with clear benchmarks. Define success metrics upfront: time saved, accuracy improved, costs reduced - then track against baselines.
Balance efficiency with skill preservation. Amplify human capabilities without eroding fundamental workplace competencies like critical thinking and communication.
Organizations must keep people at the center of all workforce strategies - from AI adoption to outsourcing - because technology alone won't solve hiring challenges in 2026.
Most organizations are asking the wrong question about AI in the workplace.
Everyone wants to know: "How do we implement AI tools?"
But they should be asking: "What problems are causing us the most pain right now, and which of these problems could be addressed through AI?"
As product manager for AI at Kelly, I’ve watched companies rush to adopt artificial intelligence without addressing the fundamental disconnect between intention and execution. My background spans machine learning at Amazon—back when we said “neural networks” more often than “AI”—through the generative AI revolution that reshaped how we think about these tools. One lesson has stayed consistent: successful AI adoption isn’t solely about the technology; it’s about how humans interact with it.
The Kelly Global Re:work Report underscores this point. Nearly seven in ten global executives believe refusing to adopt AI is a bigger threat to workers’ jobs than AI itself. Nearly half of global workers agree—they see the threat coming not from AI itself, but from workers who master it. In this environment of rapid adoption, both executives and workers know it’s fluency that makes the difference. Yet the data also shows a disconnect: only 4% of business professional and industrial executives say their organization has no AI initiatives, while almost a quarter of business professional and industrial workers say exactly that. On paper, AI in the workplace looks to be taking off. In practice, many workers still don’t see or feel it - and they won’t unless AI is aimed at real problems it can truly solve.
We're in an AI gold rush - don't buy every shovel.
This gap exists because we’re in an AI gold rush. And like all gold rushes, the people selling shovels—vendors—are often doing better than the prospectors. Organizations are under pressure to adopt “any AI” without asking whether it solves a real problem.
The result? Many companies become guinea pigs, learning through painful trial and error where AI fits. If you’re not rushing to be first, you can actually benefit from those bumps. I like to think of it as walking through a dark room: if someone else has already smacked their shins on the furniture, you can avoid it.
But too many organizations aren’t learning. Instead of starting with clear problems, they’re practicing what I call solution-first thinking—buying tools in search of a use case. That path almost always leads to wasted time, resources, and frustrated workers. The latest Kelly Global Re:work Report found that while 70% of global executives believe AI should free up employee time, less than half of global workers report actually getting that time back. Even more telling: 30% of business professional and industrial executives state workers are simply confused by the tools they've been given.
The disconnect isn't about resistance to change; it's about change without purpose. When training doesn't connect to real productivity gains or clear career paths, it feels like theater. And workers know the difference between learning something valuable and checking a box.
"Most organizations are asking the wrong question about AI in the workplace. Everyone wants to know: 'How do we implement AI tools?' But they should be asking: 'What problems are causing us the most pain right now, and how could AI help solve them?'"
— Ridge Carpenter, AI Product Manager at Kelly
Enthusiastic adoption doesn't equal effective usage.
Even when adoption looks good, it doesn’t always translate to impact. Using AI more doesn’t mean using it well. In fact, a reluctant adopter might get more value from a tool in one or two targeted use cases than an eager adopter who applies it everywhere indiscriminately.
Our CEO at Kelly once asked me: “Is there an inflection point where we’ve adopted enough for it to make a real difference?” It’s a smart question. The goal isn’t infinite adoption—it’s meaningful results.
Eight out of ten business professional and industrial executives admit their organizations struggle with AI implementation, according to our recent workforce research. The biggest hurdles? Technical issues plague nearly half of global organizations, while one-third report slow adoption rates and workers who aren’t confident in the tools. Those aren’t failures of talent—they’re symptoms of adoption outpacing effective usage. Here’s the pattern I’ve observed: when a tool solves a real problem, workers snatch it up faster than you can track it. When it doesn’t, you get exactly these issues - resistance masquerading as technical problems.
The psychology behind AI resistance.
Technical barriers are only half the story. The other half is human. You’re not just measuring whether AI works—you’re also managing how people feel about it.
When leadership talks about “efficiency,” workers often hear “job cuts.” When employees say tools don’t help them, leadership sometimes hears “resistance.” In reality, both sides want the same thing: for the job to get done well.
The Re:work Report survey results highlight this perception gap. Nearly nine in ten global executives say they feel well-informed about team performance, yet many workers don’t think their work is recognized. Global executives also believe satisfaction is rising, while fewer than three in ten global workers agree. AI sits right in the middle of this disconnect.
Here’s what’s often missing: the worst habits in AI adoption reduce workers to their outputs while disregarding the thought, learning, planning, relationships, and other intangibles that drive real impact. This hyper-focus on measurable outputs at the expense of holistic “Return on Employee” is at the heart of the distrust between executives and workers.
To bridge it, leaders need to create space for honest dialogue—not just about what AI can automate, but about where these intangibles play into the business. Ask workers where AI feels overhyped. Treat negative feedback as valuable intelligence, not insubordination. Use these conversations to understand the real problems faced every day, including the human factors that don't show up in efficiency metrics. Sometimes even raising environmental costs—like the resource load behind generating certain outputs—can help bring people into the conversation as partners, not just productivity units.
AI should help you decide, not decide for you.
At Kelly, we have a framework that guides all our AI implementation: "AI helps, you decide." These are two separate statements, not one idea. AI isn't helping you decide. AI is helping, while you are deciding. The distinction matters because decision-making has to reside with a person.
This isn't just regulatory compliance (though automated decision-making, generally speaking, doesn't legally fit into employment or other consequential areas like housing and medical care). It's a more sustainable and human-centered way to approach work. At the end of the day, human decisions should drive human impacts.
Consider AI's actual strengths: searching, revising, drafting, rephrasing. It's not well-suited for deciding, long-term planning, or building relationships. There's a way of scaling up the person through AI tools that's useful, but you have to start with what that person's problems are and apply the tool where it's most valid.
For example, AI can screen resumes at a surface level—comparing them to job descriptions and flagging the most promising candidates so a person can look closer. But the final evaluation and hiring decisions should remain human. Here's why that boundary matters: too much reliance on the AI component means people start speaking to the AI in their inputs rather than to the humans they're supposed to be interacting with. This is what fuels "dead internet theory"—and it's why AI makes for poor decision-making. It incentivizes factors that aren't human at all and decides in ways that can't be reliably observed or explained in human terms.
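To make the division of labor concrete, here is a minimal, hypothetical sketch of surface-level screening in Python: a simple keyword-overlap scorer ranks resumes against a job description and returns a shortlist for a person to review. Real screening tools use far richer models; every name and scoring rule below is invented for illustration.

```python
# Illustrative sketch of "AI helps, you decide": the tool ranks and flags
# promising candidates; a human makes the actual evaluation and decision.
import re

def tokens(text):
    """Lowercase word set, ignoring very short words."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

def screen_resumes(job_description, resumes, top_n=3):
    """Rank resumes by keyword overlap with the job description.

    Returns (name, score) pairs for a human reviewer -- the tool only
    surfaces candidates; it does not decide who gets hired.
    """
    jd = tokens(job_description)
    scored = []
    for name, text in resumes.items():
        overlap = len(jd & tokens(text)) / max(len(jd), 1)
        scored.append((name, round(overlap, 2)))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_n]

shortlist = screen_resumes(
    "Seeking python developer with machine learning experience",
    {
        "Candidate A": "python machine learning developer",
        "Candidate B": "sales experience background",
    },
)
print(shortlist)  # Candidate A ranks first; a person takes it from here.
```

The deliberate simplicity is the point: the output is a ranked shortlist handed to a reviewer, not a hiring verdict.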
"At Kelly, we have a framework that guides all our AI implementation: 'AI helps, you decide.' AI is supportive of human decision-making by summarizing, bringing in details, recording things, and classifying—but the decision-making has to reside with a person."
— Ridge Carpenter, AI Product Manager at Kelly
Don't let AI erode essential skills.
Another risk of over-adoption is skill erosion. Just think about directions: from asking a stranger, to printing MapQuest sheets, to following an app that tells you every turn. Each step made navigation easier—but also chipped away at our own ability to do it.
The Re:work data points to a similar concern with AI in the workplace. While companies want employees with AI skills, 56% of global executives say recent graduates lack workplace professionalism, and 35% cite deficits in writing proficiency and public speaking. AI tools can help bridge some gaps—like improving written communication—but they shouldn't replace the development of fundamental skills.
Real AI competency isn’t mastering prompts—it’s adaptability and hunger to keep learning. That’s what builds a workforce that uses tools intentionally instead of unconsciously outsourcing critical skills.
Weigh environmental costs in implementation decisions.
Not all AI use cases are created equal. Transcription might save hours with a relatively light footprint, while image generation can consume staggering amounts of resources. Sometimes it simply isn’t worth the environmental cost to shave minutes off a task.
Leaders should factor these tradeoffs into adoption decisions. In my own work, there are days I don’t touch AI at all, and days I rely on it heavily. The difference depends on whether it truly makes the outcome better.
Measuring what matters
AI’s value can’t be hand-waved—it needs to be measured. That means defining up front what success looks like: time saved, accuracy improved, costs reduced. Without benchmarks, you can’t know whether adoption is working.
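As a concrete illustration, a benchmark comparison can be as simple as percent change per metric against a pre-adoption baseline. The metric names and figures below are hypothetical, not drawn from the Re:work Report.

```python
# Hypothetical benchmark tracking: compare post-adoption measurements
# against a pre-adoption baseline for each success metric you defined
# up front (time saved, accuracy improved, costs reduced).

def impact_report(baseline, current):
    """Percent change per metric, relative to the baseline value."""
    report = {}
    for metric, before in baseline.items():
        after = current.get(metric, before)  # unmeasured metrics show 0% change
        report[metric] = round((after - before) / before * 100, 1)
    return report

baseline = {"minutes_per_ticket": 30.0, "error_rate": 0.08, "cost_per_hire": 4000.0}
current  = {"minutes_per_ticket": 24.0, "error_rate": 0.06, "cost_per_hire": 3800.0}

# Negative numbers mean the metric went down (good for all three here).
print(impact_report(baseline, current))
# → {'minutes_per_ticket': -20.0, 'error_rate': -25.0, 'cost_per_hire': -5.0}
```

The mechanics are trivial; the discipline is not. The value comes from recording the baseline before deployment, so "adoption is working" becomes a measurable claim rather than a feeling.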
Don’t forget: we're still going through the typical hype cycle with AI. There's the peak of inflated expectations, then you drop to the trough of disillusionment, but then you level back up to a plateau where the tool actually gives you returns–just not the pie-in-the-sky returns touted on every gilded shovel you might see on the shelves now. We're starting to see some leveling off from that initial enthusiasm, which isn't necessarily bad—it means we're getting more realistic about where these tools add value.
Problems first, solutions second.
For executives feeling overwhelmed by the noise: go back to product management fundamentals. Start with the problems that matter most to your business. Then look at whether AI can help.
Avoid adopting tools just to prove you’re “doing AI.” Instead, get clear on where it adds value and where it doesn’t. The only real misstep is refusing to learn how these tools work at all. With understanding, you can make smarter decisions about when AI makes your team better—and when it’s simply not worth the cost.
At the end of the day, AI in the workplace should be seen as a tool, not a worker. The organizations that succeed will be the ones that keep humans in the driver’s seat—using AI to amplify human capabilities, not replace them.
Want the full picture?
Download the complete Kelly Global Re:work Report to explore workforce trends shaping the future - based on insights from 6,000 executives and workers worldwide.
FAQs
How do you implement AI in the workplace effectively?
Start with a problem-first framework rather than a tool-first approach. Identify the specific business pain points causing the most friction—whether that's slow hiring processes, inefficient communication, or manual data entry. Once you've mapped your problems, evaluate which AI tools genuinely address those challenges. Implement in targeted use cases, measure results against baseline metrics, and scale what works. Avoid adopting AI just to prove you're "doing AI."
What are the biggest challenges when implementing AI in the workplace?
According to The Kelly Global Re:work Report, eight out of ten professional and industrial executives say their organizations struggle with AI implementation. The three biggest hurdles are technical integration issues, slow adoption rates among employees, and user confidence gaps. Additionally, many companies face a disconnect between leadership perception and employee reality—executives believe AI initiatives are widespread while nearly a quarter of employees don't see or feel AI in their daily work.
What should AI do vs. what should humans do in the workplace?
AI excels at supportive tasks: searching databases, summarizing information, drafting content, revising text, transcribing audio, and classifying data. Humans should handle final decision-making, long-term strategic planning, relationship building, and any choices affecting employment or human impacts. At Kelly, the framework is "AI helps you decide"—AI brings information forward, but people make the calls. This ensures compliance, sustainability, and human-centered outcomes.
How do you measure AI implementation success?
Define success metrics before deploying any AI tool. Key measurements include time saved on specific tasks, accuracy improvements in outputs, cost reductions in workflows, and employee satisfaction with the tools. Establish baseline benchmarks, then track progress monthly. Without clear KPIs, you can't determine whether adoption creates meaningful results or just activity. The goal isn't infinite adoption—it's measurable impact on identified business problems.
Why do most AI adoption initiatives fail?
Most failures trace back to solution-first thinking: buying tools in search of a use case instead of starting from a clearly defined business problem. Initiatives also stall when adoption is mistaken for effective usage, when training doesn't connect to real productivity gains or career paths, and when no success metrics are defined up front, leaving no way to tell whether the tools deliver value. Starting with problems, deploying in targeted use cases, and measuring against baselines addresses all three failure modes.
About the Author
Ridge Carpenter is an AI Product Manager at Kelly Services, specializing in the intersection of machine learning and staffing solutions, with prior experience leading computer vision product development for Amazon Halo.
Follow Ridge on LinkedIn for more insights on AI implementation in the workplace.