
AI Is an Umbrella Word (And That's the Problem)

  • Writer: Rich Greene
  • 6 days ago
  • 2 min read

Every company claims to use AI today. But what do they really mean? Sometimes it’s simple prediction; other times it’s chatbots, or just old statistics with a new label. The term AI covers a wide range of techniques that let computers perform tasks that once required human judgment. This broad use of the word creates confusion about risks, value, and what to expect. Naming the parts clearly helps cut through the noise.



What AI Really Means


AI is not a single technology but a collection of methods. These methods allow machines to handle tasks like recognizing images, understanding speech, or making decisions. The problem is that marketing often bundles very different tools under the same two letters: AI. This blurs important differences.


For example, machine learning is a core part of AI. It means computers learn patterns from data instead of following fixed rules written by humans. Classic uses include spam filters that spot unwanted emails, fraud detection systems that flag suspicious transactions, and recommendation engines that suggest products or movies.
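As a toy illustration of the idea, a spam filter can be stripped down to its simplest form: count which words appear in labeled spam versus normal mail, then score new messages by those learned counts. The messages below are invented for the example; real filters use far more data and sturdier models.

```python
from collections import Counter

# Invented training data: a few labeled messages.
spam = ["win money now", "free money offer", "claim your free prize"]
ham = ["meeting moved to noon", "lunch tomorrow", "project status update"]

# "Learning" here is just counting words per label.
spam_words = Counter(w for msg in spam for w in msg.split())
ham_words = Counter(w for msg in ham for w in msg.split())

def spam_score(message: str) -> float:
    """Score a message by the word counts learned from the data."""
    words = message.split()
    s = sum(spam_words[w] for w in words)
    h = sum(ham_words[w] for w in words)
    return s / (s + h) if (s + h) else 0.5  # 0.5 = no evidence either way

print(spam_score("free money"))       # high: words seen mostly in spam
print(spam_score("project meeting"))  # low: words seen mostly in ham
```

No rules were hand-written for specific words; the pattern comes entirely from the labeled examples, which is the essence of machine learning.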


Within machine learning, there is deep learning. This uses layers of neural networks to find more complex patterns. Deep learning powers things like image recognition, speech understanding, and large language models. All deep learning is machine learning, but not all machine learning is deep learning.
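A neural network layer is just a weighted sum of its inputs passed through a nonlinearity, and deep learning stacks several of them so the output of one layer feeds the next. A minimal sketch in plain Python, with made-up weights (a real network learns its weights from data):

```python
import math

def layer(inputs, weights):
    """One layer: weighted sum of inputs per neuron, then tanh."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in weights]

# Two stacked layers; the weight values are invented for illustration.
hidden = layer([0.5, -1.0], [[0.8, 0.2], [-0.4, 0.9]])
output = layer(hidden, [[1.0, -1.0]])
print(output)  # a single number between -1 and 1
```

Stacking more layers lets the network combine simple features into more complex ones, which is where the "deep" comes from.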


Large Language Models and Their Limits


Large language models (LLMs) like GPT predict the next word in a sentence very well. This skill can feel like reasoning because language reflects how we think. But these models do not store facts like a database. Instead, they simulate language patterns based on the data they were trained on.
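The core mechanic, predicting the next word from what came before, can be shown with a toy bigram model: count which word follows which in a small invented corpus, then predict the most common follower. An LLM does this at vastly larger scale and with far richer context, but the prediction framing is the same.

```python
from collections import Counter, defaultdict

# Tiny invented corpus; a real LLM trains on vast amounts of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most common word seen after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # the most frequent follower of "the"
```

Note that the model stores statistics about word sequences, not facts; it will happily continue a sentence whether or not the continuation is true.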


This means LLMs can sometimes produce convincing but incorrect answers. That’s why grounding is important. Grounding means connecting the model’s output to real, verified information.


One way to do this is retrieval augmented generation (RAG). RAG fetches relevant documents before generating an answer, like checking a notebook before speaking. This improves accuracy and trustworthiness.
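A minimal sketch of the RAG pattern, with made-up documents and a simple word-overlap retriever standing in for both the vector search and the language model a real system would use:

```python
# Invented knowledge base for illustration.
documents = {
    "refund policy": "Refunds are issued within 14 days of purchase.",
    "shipping": "Orders ship within 2 business days.",
}

def retrieve(question: str) -> str:
    """Pick the document sharing the most words with the question."""
    q = set(question.lower().split())
    return max(documents.values(), key=lambda d: len(q & set(d.lower().split())))

def answer(question: str) -> str:
    context = retrieve(question)  # check the notebook before speaking
    # A real system would hand `context` to an LLM as grounding;
    # here we simply return it.
    return f"Based on our records: {context}"

print(answer("How long do refunds take?"))
```

The retrieval step is what grounds the answer: the response is tied to a verifiable document rather than to whatever the model happens to have absorbed in training.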


Beyond Prediction: Agentic AI


Some AI systems go further than prediction or generation. Agentic AI can plan, use tools, and take actions. For example, an agent might schedule meetings, send emails, or even move money with proper permissions.
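The permissions point can be made concrete with a hypothetical sketch: the agent may only invoke tools it has been explicitly granted, so its potential impact is set by configuration, not by the model. The tool names here are invented for illustration.

```python
# Grant list decided by the operator, not the model.
ALLOWED_TOOLS = {"send_email", "schedule_meeting"}  # "move_money" not granted

def run_tool(tool: str, **kwargs):
    """Execute a tool call only if the agent holds that permission."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"agent lacks permission for: {tool}")
    print(f"executing {tool} with {kwargs}")

run_tool("send_email", to="team@example.com")  # allowed
# run_tool("move_money", amount=500)           # would raise PermissionError
```

This is the architectural control the next paragraph points at: the interesting question is not whether the model is clever, but what actions the surrounding system lets it take.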


This raises new questions. Instead of asking whether an AI can answer a question, we ask what it can do and what permissions it has. The potential impact depends on the system’s architecture and control mechanisms.


Cutting Through the Hype


The fastest way to understand an AI claim is to ask: Is this about prediction, generation, retrieval, action, or just branding? A fraud detection model estimating risk is very different from an agent that can execute financial transactions.


Understanding these differences helps clarify risks and benefits. It also guides decisions about cost, accuracy, and control.


AI Is a Stack Evolving at Different Speeds


The excitement around AI feels new because integration is faster today. But the fundamentals have been developing for decades. Treat AI as a stack of technologies evolving at different speeds. This view helps ask better questions:


  • What kind of AI is this?

  • What tasks can it perform?

  • How accurate and reliable is it?

  • What control and oversight are in place?


This approach leads to clearer expectations and smarter investments.



 
 
 
