Imperfection in AI Starts with a False Sense of Perfection

John P. Gormally, SR
3 min read · Oct 3, 2023

Artificial intelligence is rapidly becoming the technology needed to advance society into the future. Without AI as part of our everyday lives, will medical innovation, faster cancer diagnoses, and shorter work weeks ever become reality?

Ultimately, it is up to people to understand how AI will play a critical role in their future. Would you be more inclined to attend a university that embraces ChatGPT as a learning tool? Would you accept a software engineering position with a firm heavily dependent on Copilot to meet its product delivery timelines?

AI has quickly become a critical piece of every organization’s decision tree. CFOs, COOs, and CIOs all praise its adoption as a way to address their concerns about reducing operational costs, lowering wages, and delivering more productivity and efficiency. While these business attributes seem in line with revenue growth expectations and a rapid return to profitability, consider that the imperfection in AI starts with a false sense of perfection.

The expectation that AI will lower operating costs is the premise on which its front-end cost is supposed to become recoverable. Yet few, if any, AI-related projects or investments have yielded the expected ROI, even on a smaller scale. CIOs and CFOs already struggle to retain talent, and who would consider accepting a job offer from an organization going completely top-down on AI everywhere?

Moving to the cloud, investing in virtualization, and defining a mobility strategy all came at a higher-than-expected front-end cost with little to show as a return on the investment. We expect AI to follow the same path.

Yet, knowing this, organizations that fail to adopt AI will face a highly competitive landscape in which adversarial cybersecurity tactics have already been incorporated by hackers globally. Finance, healthcare, government, and higher-education organizations have already felt the brunt of WormGPT and FraudGPT. Hackers, like legitimate organizations, are racing to embed ChatGPT into their systems and workflow processes, and they have developed their own LLMs trained on attack telemetry to optimize future attacks. Organizations must deploy AI within their cybersecurity protection layers to counter these attacks.

Organizational recruiters looking for global talent may face an even greater challenge. Top engineers who want to build their AI skill set will often choose the firm that offers them the best opportunity to grow their AI knowledge and experience. Organizations that underinvest in AI will struggle to keep top cybersecurity, data analytics, and data science talent.

AI has been, and will continue to be, both an adversary and a compatible partner to every organization. Failing to pair an AI strategy with a strong governance strategy will make the endeavor far more costly than expected, with no hope of a measurable return. Adopting AI well will position the organization’s skills and capabilities to propel it forward. However, as with any other disruptive technology, the ultimate risk is that the whole adoption goes sideways, even with guardrails in place.


John P. Gormally, SR

John P. Gormally is a fiction and non-fiction cybersecurity blogger and writer based in Lake Forest, California.