Our relationship to computing just got wild
This article was cross-posted on LinkedIn by Brilliant’s Barry Po.
Some time ago, I had the opportunity to meet with insurance executives contemplating the impact AI might have on their business. At one end of the table sat a leader who was bright-eyed about the opportunities to automate and streamline operations. “Imagine what our business would be like if we could conduct appraisals without ever having to be onsite. Or what it would be like to have claims processing fully automated – done in seconds,” he mused.
At the other end of the table sat another leader who was decidedly less impressed. “I wouldn’t even think about changing our business until I could be absolutely sure that an AI claims process made zero errors,” he countered. To which his colleague asked: “Why would you expect that? We’d never expect that of a human adjuster.”
These kinds of debates are now happening everywhere as businesses grapple with AI and its transformational impact. That meeting had me thinking then, as it does now, about the nature of humanity’s relationship to computing. As the exchange nicely shows, part of the challenge we face in navigating AI comes from subtle, preconceived notions about what computers are and what we expect them to be able to do, and from how AI has taken everything we thought we knew and shaken it up.
For decades, we’ve lived by the implicit assumption that computers are reliable because, frankly, computers are incapable of making mistakes. The people who design and implement algorithms may make mistakes, and we may even accept that algorithms have inherent limitations. But the algorithms themselves are deterministic: without exception, utterly predictable and reliable.
(By the way – a common definition of the word algorithm is a precise set of instructions applied to transform input data into outputs, which may explain why “algorithms” and “computing” are so often used interchangeably.)
This might be among the reasons why stories of hallucinating AIs catch our attention: it seems inconceivable that an AI – a computer – should so confidently generate erroneous output.
We are on the precipice of a fundamental change in our relationship with computing: from one where computation is the domain of the predictable and mathematically provable to one where computation is the domain of the statistical, accepting and embracing inherent uncertainty.
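To make that contrast concrete, here’s a minimal Python sketch of the two mindsets. The premium calculation and the toy appraisal function are invented purely for illustration; the point is only that the first always returns the same answer for the same input, while the second samples from a distribution and may not.

```python
import random

# A deterministic algorithm: a precise set of instructions that maps
# the same input to the same output, every single time.
def monthly_premium(base_rate: float, risk_factor: float) -> float:
    return round(base_rate * risk_factor, 2)

# A statistical "computation": the output is sampled from a probability
# distribution, so the same input can yield different answers.
# (A toy stand-in for how generative models behave; the distribution
# here is invented purely for illustration.)
def appraise(description: str) -> str:
    outcomes = ["approve", "approve", "approve", "flag for review"]
    return random.choice(outcomes)

print(monthly_premium(100.0, 1.2))    # always 120.0
print(monthly_premium(100.0, 1.2))    # always 120.0
print(appraise("hail damage, roof"))  # usually "approve", sometimes not
print(appraise("hail damage, roof"))  # may differ from the call above
```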
Followers of computer graphics and gamers (like me) don’t have to go far to find consumers disappointed with the most recent generation of Nvidia GPUs, a generation that shows how the GPU maker has gone “all-in” on AI. Compared with their last-gen GPUs (the RTX 40 series), Nvidia’s latest GPUs (the RTX 50 series) offer only modest improvements in raw compute power, emphasizing instead new AI-powered capabilities like frame generation and DLSS (“Deep Learning Super Sampling”), which use AI to dramatically improve frame rates and the rendering of complex scenes.
A game that struggles to render at 30 frames per second might, with these AI technologies, reach 120 frames per second or more.
Some cynical consumers call this a scandal of “fake frames,” arguing that Nvidia is ripping off GPU buyers by skimping on the hardware and papering over the lack of real gains in computational power with AI tricks.
Here’s another point of view: with the end of Moore’s Law, expectations of exponential year-over-year gains in computing power through the sheer multiplication of transistors and silicon density have run their course, at least given what we know about chip manufacturing at today’s scales, raising the question of where the next big leaps in computation will come from.
Nature may suggest a way. To take just one example, the human visual system is arguably one of the most advanced biological computers in existence today. Decades of psychology research in visual perception and cognition have shown how the brain works in tandem with the eyes to decompose the complex world we see into representations that can be processed into chains of perception and action. Instead of trying to brute-force its way through the visual world we experience, the human brain intelligently parses visual data into a form that’s tractable to process.
This includes a certain degree of approximation and invention, creating visual coherence where none may exist. For example, there’s a well-documented “blind spot” in each human eye where the optic nerve passes through the retina on its way out of the eye, leaving a patch with no photoreceptors. While we should see nothing there, the visual system intelligently fills in that void with what it thinks you “should” be seeing, even in the absence of actual sensory data.
And if you’ve ever experienced a visual illusion, you’ve seen an ambiguity or artifact of that processing at work, as the visual system reconciles conflicting signals to maintain a coherent visual experience.
In short, much like the human visual system, Nvidia has decided that further gains in graphics rendering won’t come from improvements in hardware alone. They will come from machine learning, which lets frames be statistically evaluated, predicted, and interpolated at a fraction of the compute power that would otherwise be needed. The visual cost might be a lossy image in the information-theoretic sense (in other words, an image that isn’t exactly correct), but the net result is a better user experience.
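As a rough illustration of the idea (and only the idea; actual DLSS frame generation relies on trained neural networks and motion vectors, not a simple blend), here’s a toy sketch in which an in-between frame is predicted from two rendered frames rather than rendered itself.

```python
import numpy as np

def interpolate_frame(frame_a: np.ndarray, frame_b: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Estimate an in-between frame as a weighted blend of two rendered frames.

    Real frame generation uses trained models plus motion vectors; this naive
    linear blend only illustrates predicting a frame instead of rendering it.
    """
    blended = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return blended.astype(np.uint8)

# Two fully rendered 1080p RGB frames (random noise here, standing in for real renders).
rendered_1 = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
rendered_2 = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)

# The "free" middle frame: statistically plausible, cheap to produce,
# and not guaranteed to match what the engine would actually have drawn.
generated = interpolate_frame(rendered_1, rendered_2)
```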
Last September, Nvidia CEO Jensen Huang and Salesforce CEO Marc Benioff took to the stage to talk about how AI adoption will be more like onboarding employees than writing software. Just as employee onboarding and performance management are crucial parts of the talent plans of any high-performing organization, the emerging landscape of agentic AI lends further support to the idea that our relationship to computing is undergoing important changes.
Instead of designing algorithms and proving their correctness, we will “teach” AI by providing objectives, ground rules, and examples of what good looks like. And, outside of perhaps some ultra-high-stakes scenarios, performance won’t be measured in cold, error-free absolutes. It may instead be managed: how well might an AI perform in a given role? How will that performance be evaluated and improved? How do we give constructive feedback? And what processes will exist when exceptions or the unexpected arise?
Recognizing that the performance of any AI is not guaranteed ultimately means moving from treating computing as a source of infallibility to expecting that computers, like people, will sometimes make mistakes.
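Here’s a minimal sketch of what that kind of managed performance might look like in practice, using the claims example from the opening anecdote. Every name and number below is hypothetical; the point is only that acceptance is judged against a human baseline error rate rather than against perfection.

```python
import random

# Hypothetical acceptance test for an AI claims process: instead of demanding
# zero errors, compare its sampled error rate against a human baseline.
# All names and numbers here are invented for illustration.

HUMAN_BASELINE_ERROR_RATE = 0.04   # assumed error rate of human adjusters
SAMPLE_SIZE = 1_000

def ai_decision_is_wrong(claim_id: int) -> bool:
    """Stand-in for auditing one AI-processed claim against ground truth."""
    return random.random() < 0.02  # pretend the AI errs on ~2% of claims

errors = sum(ai_decision_is_wrong(i) for i in range(SAMPLE_SIZE))
ai_error_rate = errors / SAMPLE_SIZE

if ai_error_rate <= HUMAN_BASELINE_ERROR_RATE:
    print(f"Acceptable: {ai_error_rate:.1%} vs human baseline {HUMAN_BASELINE_ERROR_RATE:.1%}")
else:
    print(f"Needs coaching: {ai_error_rate:.1%} exceeds the human baseline")
```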
Such thinking may also be at the heart of the next round of major AI advances. Beyond the rampant hunger for compute and energy, AI experts are now talking about running out of data to train models on, a consequence of the brute-force nature of machine learning today, even with modern Transformer architectures and a growing reliance on artificially generated data to make up for gaps in available human-created content. All of this suggests a push toward learning more efficiently: extracting knowledge from incomplete, perhaps sparse, data sets and still reaching viable performance. A more statistical approach, if you will.
Our relationship to computing is in the midst of its own transformation, one where computers are more than just deterministic machines. They could very much be beings in their own right: making sense of noisy data, extrapolating beyond what the data alone may say, enabling new behaviors we might not be able to predict. And maybe, in doing so, elevating how we think about applying computation to solve hard problems. It’s going to be wild.