LET'S GET STRATEGIC

Evaluating New Entrants to the World of Gen AI
by Linda Pophal
OpenAI released ChatGPT, a generative artificial intelligence (gen AI) tool, in late 2022, and other entrants to the space, including Perplexity and Claude, soon followed with considerably less fanfare. Then DeepSeek burst onto the scene, far from quietly, in January 2025. Almost immediately, tech companies began to worry because DeepSeek supposedly represented a dramatically lower-cost option, although experts are now saying that may not be the case.
But DeepSeek isn’t the only disruptive entrant in the space. Elon Musk’s Grok, which has direct access to data from X, is another tool that has generated plenty of buzz (and its fair share of criticism) since its release in November 2023. Businesses of all types and sizes have adopted gen AI tools rapidly, despite a very uncertain landscape in which questions about accuracy, privacy, the protection of intellectual property, and the potential for plagiarism remain largely unanswered. Still, companies hoping to stay relevant and competitive know they can’t simply ignore these tools. But how can they make well-informed decisions about which tools to test, which to adopt, and which to avoid?
THE PLAYING FIELD
“As a researcher and author focused on human experience in the context of emerging technologies, I’ve observed that the emergence of new AI models like DeepSeek and Grok presents us with a fascinating paradox,” says Kate O’Neill, author of the 2025 book What Matters Next: A Leader’s Guide to Making Human-Friendly Tech Decisions in a World That’s Moving Too Fast. “While they represent genuine advances in capability, they also risk becoming distractions from the foundational work organizations need to do to create meaningful value,” she claims.
While gen AI tools are similar in terms of what they do—generate content—each AI model has pros and cons and strengths and weaknesses, notes Manuj Aggarwal, founder and CIO at AI and automation firm TetraNoodle. “AI is evolving fast, and DeepSeek and Grok are just the beginning. New models will continue to emerge, each claiming superiority. The real question is not about hype, but about practical impact and fit for purpose,” Aggarwal asserts.
Randall Hunt, CTO at Caylent, an Amazon Web Services consulting and engineering firm, says that “the AI landscape is still ripe for disruption.” But, while there’s plenty of buzz, Hunt feels that the sense of panic is overstated. “In fact, DeepSeek hasn’t dethroned anyone yet,” he states. Hunt’s company has tested DeepSeek-R1 and “found it missed several proprietary tests that industry-leading models all succeeded on.” He acknowledges that it’s still too early to tell what will happen with DeepSeek.
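Internal benchmark suites like the one Hunt describes don't need to be elaborate to be useful. The sketch below is purely illustrative (the test cases, pass/fail checks, environment variable, and model name are hypothetical, and it assumes DeepSeek's OpenAI-compatible chat API, which should be verified against current documentation): it runs a handful of known prompts through a candidate model and reports how many responses pass a simple check.

```python
# Minimal sketch of an internal model evaluation harness (hypothetical test cases).
# Assumes an OpenAI-compatible chat endpoint; DeepSeek advertises one, but the
# base_url and model name should be confirmed against the provider's current docs.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # assumed environment variable
    base_url="https://api.deepseek.com",     # assumed OpenAI-compatible endpoint
)

# Each case pairs a prompt with a simple pass/fail check on the model's reply.
TEST_CASES = [
    ("What is 17 * 24? Answer with the number only.", lambda r: "408" in r),
    ("Name the capital of Australia in one word.", lambda r: "canberra" in r.lower()),
]

def run_suite(model: str = "deepseek-reasoner") -> float:
    """Return the fraction of test cases the candidate model passes."""
    passed = 0
    for prompt, check in TEST_CASES:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        passed += check(reply)
    return passed / len(TEST_CASES)

if __name__ == "__main__":
    print(f"Pass rate: {run_suite():.0%}")
```

A production suite would draw on an organization's own domain-specific prompts and a much larger sample, but even a small harness makes "does it actually solve our problem?" a measurable question rather than an impression.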
CHOOSING NEW TECHNOLOGY
“When it comes to choosing an AI model, the most important thing organizations should consider is if it actually solves a business problem,” Hunt suggests. “Don’t get distracted or buy into the hyperbole.” A broader perspective can help, according to O’Neill. “Organizations aren’t just adopting tools; they’re building frameworks for human-technology collaboration that will define their future,” she says, adding, “While DeepSeek shows impressive mathematical and coding capabilities, the real question isn’t about performance metrics—it’s about how these tools impact overall organizational effectiveness and unleash human potential.” Smart businesses, says Aggarwal, need to make choices “based on their needs, risk tolerance, and innovation strategy.” What works for one company may not work for another.
AVOIDING RISK
There can be risk involved in the use of some of these tools, notes Aaron Bailey, general manager of AI at AppDirect. “We’re not seeing enough organizations navigating new options and employees’ use of them, and it’s a huge issue,” he says. “As a result, shadow AI, or unsanctioned/unapproved AI tools being used by employees, is a huge security risk to enterprises given the variety of policies the foundational model companies have around training off user data in their free plans.” It’s important, Bailey adds, for organizations to experiment with new AI tools. IT admins “need to work with internal buyers to accelerate the process of identifying and deploying the best tools securely.”
That’s what Simon Lee, founder and CEO of Glance, a mobile app development firm, does with his team. Lee says, “Our rigorous, data-driven experiments reveal that while these new tools dazzle with speed and creative output, they sometimes falter in maintaining context, especially in complex code documentation. Our internal analytics, paired with real-world testing, show that generative AI can accelerate brainstorming and prototyping, but its nuances still demand human oversight. Decisions here aren’t swayed solely by media hype or competitor chatter; instead, we evaluate performance metrics, employee feedback, and integration ease.” He offers this example: “While DeepSeek’s algorithms boosted our initial content drafts by 25%, inconsistencies necessitated additional rounds of refinement. Consequently, we’re adapting our policies and training programs to incorporate these tools as supportive assets rather than standalone solutions.”
Balancing innovation with caution is pivotal, Lee asserts. “Harnessing AI’s power is not about blind adoption; it’s about meticulous evaluation and adaptive practices that drive sustainable innovation while mitigating risk,” he says. “This measured approach enables us to remain agile, ensuring that our competitive edge isn’t compromised by overreliance on unproven tech.”
Another risk likely to get the attention of most, if not all, content creators is the potential for their content to be devalued by search engines such as Google. “Google is increasingly targeting AI-generated, low-quality content,” warns Sandro Lonardi, CMO at SOAX, an intelligent data collection platform. “Since AI models primarily repackage existing information rather than producing truly original insights, they add limited unique value.” According to Lonardi, while DeepSeek can be a powerful marketing assistant, it has these key limitations:
- Lack of real-time intelligence—It can’t access live web data, making its insights potentially outdated.
- Limited customization—Without real, dynamic audience data, its insights and content remain broad and may not drive meaningful impact.
- No real innovation—As a gen AI tool, it reshuffles existing information rather than producing groundbreaking ideas or novel insights.
MAKING GOOD DECISIONS
It’s a continually changing field, which means that a company’s choices can be complex and confusing. It’s important to stay focused on business objectives and goals while avoiding potential risks. O’Neill says organizations should focus on the following critical dimensions when evaluating technology:
- Value integration—“Beyond comparing feature sets, leaders must evaluate how new AI capabilities amplify their unique value proposition and serve their constituents’ authentic needs.”
- Responsible innovation—“This goes beyond just risk management—although that’s crucial. It’s about proactively designing AI implementation practices that enhance human dignity and agency while delivering business results.”
- Strategic alignment—“[T]he most successful organizations are choosing AI tools that amplify their core purpose and strengthen their meaningful difference in the market.”
Smile MEDIA, LLC, a digital marketing and technology-driven content agency, has been actively evaluating emerging AI tools such as DeepSeek and Grok. CEO Sheyne Branconnier says the adoption strategy revolves around the following three core factors (a rough scoring sketch follows the list):
- Accuracy and bias reduction—Can the AI generate factually sound content, and does it mitigate bias better than existing models?
- Integration and workflow—Does it seamlessly integrate with our existing content and SEO tools, or does it require a complete overhaul?
- Regulatory and ethical compliance—With evolving AI regulations, how does adopting a new AI tool impact intellectual property, data security, and client trust?
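Factors like these can be compared across candidate tools with something as lightweight as a weighted scorecard. The sketch below is purely illustrative (the weights, tool names, and scores are hypothetical and would come from an organization's own stakeholders), but it shows how criteria such as Branconnier's three can be turned into a repeatable comparison rather than a gut call.

```python
# Illustrative weighted scorecard for comparing candidate AI tools.
# All weights and scores are hypothetical; each criterion is scored 1-5.
WEIGHTS = {"accuracy_and_bias": 0.5, "integration": 0.3, "compliance": 0.2}

CANDIDATES = {
    "Tool A": {"accuracy_and_bias": 4, "integration": 3, "compliance": 5},
    "Tool B": {"accuracy_and_bias": 5, "integration": 2, "compliance": 3},
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores into a single weighted total."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

# Rank candidates from highest to lowest weighted score.
for name, scores in sorted(CANDIDATES.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```

The arithmetic is trivial; the value is that writing down criteria and weights forces the "needs, risk tolerance, and innovation strategy" conversation to happen before a tool is adopted rather than after.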
“While generative AI promises increased efficiency and innovation, risks include content originality dilution, misinformation, and overreliance on automation,” Branconnier shares. “Our goal is to strike a balance, leveraging AI for data-driven insights and content structuring while preserving human expertise in brand messaging, storytelling, and strategic decision making.” The real opportunity isn’t about having the latest AI, O’Neill claims. “It’s about developing the organizational wisdom to harness technology in service of human progress. This means cultivating digital literacy, yes, but more importantly, it means strengthening our collective ability to make technology choices that enhance human meaning and create sustainable value.”