As a venture investor focused on AI-enabled software, I’m a promoter of the technology. It can and does provide enhanced value.
But over the past three weeks, while attending the RSA Security Conference in San Francisco and the AI Congress in Las Vegas, I have become concerned about the market. Both shows drove home how overhyped my beloved focus area has become.
This AI overmarketing is going to confuse, frustrate and ultimately disillusion potential customers.
This hype began over a year ago. In fact, last year, Gartner proclaimed that AI had reached the “peak of inflated expectations.” All the signs pointed to that view: there has been mass media coverage, usage has grown beyond early adopters and negative press has begun to appear. The next stop on this journey is the trough of disillusionment. I’m hoping the industry can limit the depth of that trough by getting real about what AI is and what it can actually do.
A journey to the peak
My recent trip to RSA in San Francisco felt like a voyage to Gartner’s peak. The hype machine started as soon as the plane landed at SFO. Exiting the jetway, the first airport ad I saw was from a security vendor claiming that its AI tech could thwart phishing attacks against my organization.
Maybe this product could stop some of those attacks, or even most, but certainly not all. Still, some percentage of the people who see the ad will try the product or become customers. Ads work. That’s why companies keep paying for them.
Getting on the 101 and heading north to my hotel near the Moscone Center, I saw a billboard from a vendor claiming its solution leveraged AI to provide immunity from cyberattacks. Clever marketing, but again I was struck by the exaggerated claim.
On the RSA Expo show floor, including the Early Stage Expo, some 500 vendors marketed their solutions to the world’s cybersecurity woes. My best estimate is that at least half were trumpeting the proprietary AI or machine learning in their products.
A false view of the market’s progress
Assessing the landscape at the show, you might think AI/machine learning was already a commoditized market. You might think the science has been solved. You might assume the technology has been easily and cost-effectively implemented in today’s hardware and software, and that there is voluminous training data easy to access and leverage.
But that’s not where the market is right now. To use a baseball analogy, we are in the first inning of an AI-based solutions market. There are some great companies out there applying highly focused AI to targeted problems. They work. In some cases, they provide endpoint protection far better than their competitors’. In others, they detect anomalies in SIEM data that would otherwise go undiscovered.
But there aren’t 250-plus companies that have solved all of the world’s cybersecurity problems with technology implemented today. We still have a lot of innovation, development and training to do before we get there.
Artificial artificial intelligence
What’s most disturbing to me is the number of companies claiming they have AI when they’re really using predefined statistical algorithms (like many of us learned in college), then hand-tuning the system on historical data to optimize performance at run time. This isn’t ML. This isn’t AI. This is optimized statistical modeling.
In Las Vegas, at the AI Congress, I heard an AI-focused IT executive from a major bank talk about how much AI the bank was already using in its fraud department. The exec then described how the algorithms were manually tuned by the company’s data science team to outperform static models with preset behavioral thresholds. That isn’t AI! It may be intelligent, but it isn’t artificial intelligence.
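The distinction is easy to blur in a sales pitch, so here is a deliberately toy sketch of the difference. Everything in it is invented for illustration (the function names, the amounts, the labels); real fraud systems are vastly more complex, but the principle holds: in a hand-tuned statistical rule, a person sets the parameter, while in a learned model, the data sets it.

```python
from statistics import mean, stdev

# "Optimized statistical modeling": an analyst studies historical
# transactions and picks the z-score cutoff BY HAND. The data science
# team may re-tune it later, but the rule never learns on its own.
HAND_TUNED_Z = 3.0  # chosen manually -- this is the tell

def rule_flags(history, amount):
    """Flag a transaction if it sits more than HAND_TUNED_Z standard
    deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma > HAND_TUNED_Z

# Learning, in miniature: the cutoff is FIT from labeled examples
# rather than chosen by a person -- here, simply the midpoint between
# the mean legitimate amount and the mean fraudulent amount.
def fit_cutoff(amounts, labels):
    legit = [a for a, y in zip(amounts, labels) if y == 0]
    fraud = [a for a, y in zip(amounts, labels) if y == 1]
    return (mean(legit) + mean(fraud)) / 2

def model_flags(cutoff, amount):
    return amount > cutoff
```

In both cases a threshold decides the outcome; the difference is who sets it. Swap the hand-picked constant for a parameter estimated from data and you are, at least in spirit, doing machine learning; keep the constant and you are doing statistics with good marketing.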
What worries me is that enough buyers will purchase existing solutions that overpromise and then underdeliver, hurtling us into the trough of disillusionment. If the AI solutions of the first inning create this dynamic, will the customers still be there for the innovators of the second inning, the companies whose solutions should, in theory, provide enhanced performance?
I have to admit that companies are in a bit of a dilemma now. If you want to be considered an innovative leader, you have to claim you have AI under the hood. But if you overstate your claims, you potentially set yourself, and ultimately the broader industry, up for tremendous failure.
We’re at a challenging point in the early maturity of this market. Only with proper expectations set can both security vendors and customers succeed. If you’re a security executive, go into any trial or purchasing decision fully aware that no single solution available today can do everything. The best targeted AI-based solutions can be very helpful in preventing many of the exploits commonplace today. But no single solution covers the waterfront. And new products will have to keep improving, as the exploits themselves become increasingly varied and sophisticated.
And as vendors, let’s all try to be a little more honest about what our products can and can’t do. Let’s not tarnish the industry before it’s truly begun.
This article is published as part of the IDG Contributor Network.