Generative and predictive AI need one another. They’re destined to marry because each is suited to address the other’s greatest limitations. Here are two ways.
Eric Siegel
Predictive AI and generative AI are two very different animals that, it turns out, desperately need one another. Since they serve different purposes, projects that employ them are often siloed away from each other. Yet these two “flavors” of AI are destined to marry because each is suited to address the other’s greatest limitations: GenAI is expensive and often unreliable, while predictive AI is hard to use.
Addressing their limitations in this way is key to achieving the reliability AI needs in order to soften the blow of the AI bubble’s looming detonation. In a previous article, I covered five ways to hybridize generative and predictive AI so that they help one another:
A) Address genAI’s deadly reliability problem by predictively targeting human-in-the-loop intervention.
B) Address predictive AI’s complexity with a chatbot that serves as an assistant and thought partner to elucidate, clarify and suggest.
C) Use genAI to “vibe code” for predictive AI projects.
D) Use genAI to derive predictive features.
E) Use large database models, which employ methods similar to large language models to learn from structured data and make predictive AI development more turnkey.
Building on those five, here are two more ways genAI and predictive AI work best together.
1) GenAI Explains Predictive AI’s Decisions
It’s almost self-evident that genAI can explain itself. I popped a photo of my son into ChatGPT, along with the question, “Is this person happy and can you explain your answer?” and it did just fine (although a single example doesn’t prove reliability in general).
[Image: An interaction with ChatGPT. Credit: Eric Siegel]
This capability extends to the large-scale, rote decision making that predictive AI implements for highly sensitive domains such as credit decisions and medical diagnosis.
Even though such predictive AI projects tend to employ simpler models than genAI models, predictive models can still be far too complex for humans to understand in the raw. Well before the advent of transformers, large language models and genAI, the machine learning methods that underlie predictive AI were already notorious for turning out black boxes. For projects that go beyond simpler methods like decision trees and logistic regression – instead using more sophisticated methods like ensemble models and neural networks – the resulting models were, and still are, generally impenetrable, too complex to understand.
The field of explainable machine learning, aka explainable AI or XAI, emerged in order to provide humans with insight into how an ML model drives decisions. For many projects, this human understanding is critical for vetting models against ethical considerations, or for earning the trust of stakeholders who are only comfortable greenlighting a model when given some intuition about what makes it tick.
But explainable ML often falls short of making things entirely clear. While XAI approaches provide technical insight that ML experts can interpret, they don’t necessarily bridge the “understanding gap” by delivering explanations that business-side stakeholders can follow.
Enter genAI, which translates technical explanations into plain English (or other human languages). For example, the TalkToModel system conducts dialogues like this:
Human: Applicant 358 wants to know why they were denied a loan. Can you tell me?
TalkToModel: They were denied because of their income and credit score.
Human: What could they do to change this?
TalkToModel: Increase credit score by 30 and income by $1,000.
XAI is meant not only to explain individual decisions one at a time, but also to provide insight into a model’s decision-making process in general:
Human: What types of patients is the model typically predicting [diabetes] incorrectly?
TalkToModel: For data with age greater than 30, the model typically predicts incorrectly. If BMI > 26.95 and glucose …
By readily addressing the need for plain-language explanations, genAI for XAI is a vital, promising and rapidly emerging area.
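To make the pattern concrete, here is a minimal sketch of the general idea in Python – not TalkToModel’s actual implementation. It assumes an XAI step (such as SHAP) has already produced per-feature attributions for one denied loan applicant, and it uses the OpenAI Python client to translate those numbers into plain English; the attribution values, model name and prompt are all illustrative.

```python
# Minimal sketch: translate a technical XAI explanation into plain English with an LLM.
# The attribution values below are illustrative, as are the model name and prompt.
from openai import OpenAI

# Suppose an XAI method (e.g., SHAP) already scored each feature's contribution to one
# denied loan application: negative values pushed toward denial, positive toward approval.
attributions = {
    "credit_score": -0.42,
    "annual_income": -0.31,
    "debt_to_income_ratio": -0.08,
    "years_at_current_job": +0.05,
}

technical_summary = ", ".join(f"{name}: {value:+.2f}" for name, value in attributions.items())

client = OpenAI()  # assumes an API key is configured in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would do
    messages=[{
        "role": "user",
        "content": (
            "A loan model denied an applicant. Its feature attributions were: "
            f"{technical_summary}. In one or two sentences of plain English, tell the "
            "applicant the main reasons for the denial and what they could change to "
            "improve their chances."
        ),
    }],
)
print(response.choices[0].message.content)
```

The key move is feeding the technical explanation into the prompt, so the LLM restates what the XAI method actually found rather than guessing at the predictive model’s reasoning.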
2) Predictive AI Inexpensively Approximates GenAI
GenAI models are general-purpose. Rather than specializing for any one task, they offer very wide applicability. But genAI models only achieve this generality by being big – and therefore computationally expensive to use. No matter what you’re using it for, genAI carries with it this great overhead.
The remedy? Train a lighter-weight predictive AI model to emulate genAI – for one individual task at a time.
For example, a manufacturer in India needs to validate a million retail shops, in part by analyzing shop photos taken by sales personnel. Some photos are fraudulent, in part because sales staff are incentivized to submit a good number of store photos in order to confirm the extent of their sales efforts out in the field. Other photos are unacceptable because they fail to meet certain visual standards. Invalid photos that go undetected cost the manufacturer through wasted sales efforts, inaccurate metrics and other issues.
GenAI offers a solid approach: Simply show it a storefront photo and ask it the questions that validation requires – such as whether signage and products are visible, whether it’s daytime and the store is open, and whether the store’s full exterior is shown.
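Here is a hedged sketch of what that genAI check might look like in code – the prompt wording, model name and use of the OpenAI Python client are illustrative assumptions, not Bizom’s actual implementation:

```python
# Illustrative sketch: ask a multimodal LLM the validation questions about one storefront photo.
import base64
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

def validate_photo(path: str) -> str:
    # Encode the local image so it can be passed to the model as a data URL.
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # any multimodal chat model would do
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    "Is this an acceptable retail storefront photo? Check whether signage "
                    "and products are visible, whether it is daytime and the store is open, "
                    "and whether the store's full exterior is shown. "
                    "Answer ACCEPTABLE or NOT ACCEPTABLE, with a one-line reason."
                )},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```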
But applying this approach to a million images would be prohibitively expensive. Rohit Agarwal, Chief AI Officer at the AI vendor Bizom, has shown that predictive AI addresses this problem, as he presented at Machine Learning Week 2025, the conference series I founded (his detailed slides are available online).
Bootstrapping Predictive AI With GenAI
First, Rohit’s team used genAI to classify a sample of 10,000 images. Specifically, in order to improve performance, they employed three competing multimodal large language models to classify each image as either acceptable or not acceptable, and counted as acceptable only those images unanimously labeled as such by all three models.
Second, his team used this labeled data as training and testing data to develop deep learning models (convolutional neural networks), the established standard supervised learning method for such image-classification problems.
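Here is a rough sketch of that second step in Python, assuming the unanimously labeled photos from step one have been sorted into two folders on disk and using Keras for the CNN; the architecture, file layout and hyperparameters are illustrative, not Bizom’s actual setup:

```python
# Illustrative sketch: train a small CNN on the genAI-labeled sample.
# Assumes step one left unanimously labeled images in
# labeled_photos/acceptable/ and labeled_photos/not_acceptable/.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "labeled_photos", image_size=(224, 224), batch_size=32,
    validation_split=0.2, subset="training", seed=42)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "labeled_photos", image_size=(224, 224), batch_size=32,
    validation_split=0.2, subset="validation", seed=42)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # acceptable vs. not acceptable
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)

# Once trained, this lightweight model can score millions of new photos
# without calling a large genAI model for each one.
```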
The resulting deep learning model is both effective and practical. It approximates the performance of genAI closely enough to serve the project. And it is a much lower-footprint model, capable of classifying millions of images without incurring undue expense. By augmenting this approach with a similar, complementary process that detects duplicates – photos of the same store submitted as distinct stores – Bizom has developed a complete solution.
GenAI can’t fulfill this project’s requirements without predictive AI – and vice versa. In the end, a deep neural network is operating on its own, without genAI, but it couldn’t have gotten there without genAI to create the training data in the first place. Obtaining labeled data is a bottleneck for predictive AI, so another way to think of this project is that genAI labels the training data for predictive AI.
Both of these hybrid approaches let you have your cake and eat it too: You can leverage the unique capabilities of genAI while also enjoying the enormously lighter footprint of predictive AI’s standard methods. Interestingly, both of these approaches involve creating simpler versions of models so that they’re more manageable — either more understandable or less costly to use, respectively. Yet both approaches leverage the relatively new abilities afforded by complex genAI models.
For another five ways to hybridize predictive AI and genAI, see my prior article and attend my presentation, “Seven Ways to Hybridize Predictive AI and GenAI That Deliver Business Value,” at the free online event IBM Z Day (live on November 12, 2025 and available on-demand thereafter).
If your work involves hybrid AI, consider submitting a proposal to speak at Machine Learning Week 2026.

