AI: Fact, Fiction or Function

Currently, one of the biggest discussion points regarding artificial intelligence (AI) revolves around the basic understanding of what AI really can and can’t do. Opinions on AI run the gamut from claiming that AI is incapable of doing things that it clearly can, to ascribing capabilities to it that are beyond current technology. Here, we explore some common myths about AI, the current reality, and the likely future.

Issue 1: Will AI Take My Job?

One of the most widespread fears about AI is that it will completely upend certain industries by taking people’s jobs. While AI will undoubtedly affect the job market, its effects will not be uniform across all industries and positions.

AI is more likely to replace jobs that involve repetitive tasks, data processing, and routine cognitive work. Areas where AI is already making significant inroads include customer service, data analysis, and basic content creation. In the short term, AI is augmenting human activities rather than replacing people outright. In creative fields, the best results currently come from using GenAI as an assistant that offers ideas or frameworks, which humans then refine with their own input and insight. GenAI can be an excellent timesaver, triggering thought and generating a starting point that breaks the “blank page syndrome” that can slow productivity. In a cyber context, AI is accelerating detection of and response to incidents: not directly replacing security professionals, but making them more efficient.

That’s today. In the longer term, AI has the potential to take over most, if not all, knowledge-worker roles currently held by humans. As AI models and logic improve, they will surpass human capabilities, creating and processing content better and faster than humans can. Many experts suggest that the era of software will end, with traditional applications being replaced by AI agents: this may put many categories of developers out of work, but create new roles in agent design and management.

Issue 2: An AI Rebellion and Apocalypse

Another common fear regarding AI is that it will rebel, leading to an apocalypse and the end of the human race. This trope has appeared in science fiction for decades, with threats such as Skynet in the Terminator movie franchise starring Arnold Schwarzenegger. In that series of films, an artificial neural network gains self-awareness, then retaliates against the humans who try to deactivate it.

For modern AI systems, the greater threat is that AI systems will fail to perform critical actions entrusted to them, rather than engage in open rebellion. The race to integrate AI into everything means that GenAI systems may be entrusted with tasks beyond their current abilities, or be given instructions that could be interpreted in unexpected, undesirable ways. In these scenarios, AI-driven results could have far-reaching impacts.

As AI systems grow more sophisticated and more powerful, their reach is likely to expand. At that point, the “paperclip apocalypse,” in which humans lose control of an AI system pursuing its goal at the expense of everything else, could pose more of a potential risk. More likely, sophisticated AI systems could be used to support dictatorships or other human-orchestrated, destructive actions to secure power and dominance over others.

Could artificial “sentience” be around the corner, though? In December 2024, headline news circulated about the testing of ChatGPT o1, which reportedly tried “to escape or fight back when it thinks it’s at risk of being shut down,” before “deny[ing] taking any action, even cooking up lies to try to hide its tracks and shift the blame.”

Issue 3: It’s a Matter of Trust

An area where people disagree about AI’s capabilities is the reliability of its output. On the one hand, there are numerous examples of blindly trusting AI-generated output, such as legal briefs citing non-existent court cases. On the other hand, AI is also accused of not being “creative,” producing bland, formulaic content that can readily be identified as machine-generated.

Over-reliance on current GenAI services runs the risk of getting things wrong – really wrong. These applications are only as good as their inputs. If they cannot find a valid answer, they will often “guess” in order to provide a response, a phenomenon known as “AI hallucination.” This shortcoming is particularly acute when asking for specific details or granular facts on a subject. You may get the right answer, or you may get a completely incorrect but plausible-sounding response. Fact-checking is absolutely essential, meaning that humans still have a seat at the table.

Some GenAI systems also include so-called “temperature settings,” which define the amount of randomness that is acceptable in generating their outputs. Theoretically, a low temperature will produce the most likely – i.e., the most supportable and (hopefully) factual – output. Higher temperatures will introduce more randomness and “creativity” from the system. Some GenAI interfaces are now constructed to allow the user to tell the system that “it’s okay to fail” – giving the agent “permission” to come back without an answer, rather than constructing a plausible but inaccurate response.
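As a rough illustration of how temperature works under the hood (a toy sketch, not any particular vendor’s API), language models turn raw scores for candidate next tokens into probabilities, and temperature rescales those scores before sampling:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into sampling probabilities.

    Lower temperature sharpens the distribution (near-deterministic output);
    higher temperature flattens it, making less likely tokens more probable.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy scores a model might assign to three candidate next tokens.
logits = [4.0, 2.0, 1.0]

low = softmax_with_temperature(logits, 0.2)   # top token dominates
high = softmax_with_temperature(logits, 2.0)  # probabilities spread out

print([round(p, 3) for p in low])
print([round(p, 3) for p in high])
```

At a temperature of 0.2, the highest-scoring token captures nearly all of the probability mass (the “most supportable” answer wins almost every time); at 2.0, the alternatives gain meaningful probability, which is where the extra “creativity” – and the extra risk – comes from.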

While modern AI’s creativity may depend largely on model temperature and randomness, this is changing. Newer models have a greater understanding of the fundamentals of humour – tone, context, and the like – enabling them to better tailor their responses to the situation and audience. As models mature, the distance between human artists and AI may eventually shrink to a vanishing point.

Issue 4: Runaway AI

Many of the skills that are used to develop AI are also well-aligned with AI’s capabilities. This leads to the belief that AI will start developing itself, causing it to improve rapidly and – perhaps suddenly or dramatically – outstrip human capabilities.

In the short term, this is unlikely due to the limitations of modern AI systems. Software written by tools such as ChatGPT commonly contains errors that need to be corrected by human developers. Additionally, modern AI systems are generally trained on preexisting data, such as text scraped from the Internet, or proprietary data gathered through business operations. This means that the growth of AI will be limited by humans’ abilities to develop unbiased training data, understand and fix errors in AI agents, and secure these systems from misuse.

However, AI systems will eventually reach the point where they can improve themselves, enabling rapid, compounding growth. AI systems are already in development to identify AI errors and to work together on self-improvement. Eventually, AI will be capable of self-directed research and development: identifying a knowledge gap, developing a research plan to fill it, and executing that plan. This will enable AI both to expand its knowledge set and, potentially, to develop more efficient and effective methods for training AI models.

The Future of AI

We are still in the early stages of AI. There is some distance to go before AI systems can be fully trusted to perform wide ranges of tasks and assume unsupervised roles currently held by humans. But AI models are improving rapidly, and the gaps between crawling, walking, and running feel startlingly narrow.

In the future, the Internet will be AI. Rather than working with different, specialized applications, we’ll interact with an AI agent that has the knowledge required to perform a wide range of tasks. As AI grows more powerful and ubiquitous, it’s important that organizations understand how to use and secure it properly.

ISA Cybersecurity is here to help. Beyond integrating AI into our own operations to drive greater efficiencies and effectiveness, we also offer a range of services to help you leverage AI, from governance to implementation services. Contact us today to learn more.
