AI may be about to redefine how companies derive value from their IT systems at the most basic level, from understanding customers to improving business processes, according to a senior executive at enterprise software firm Appian.
In conversation with ITPro, Malcolm Ross, senior vice president of product strategy at Appian, said AI has matured to the point where customers are seeing efficiency improvements as a result of AI adoption, but that it still has far more potential for 2025 and beyond.
To illustrate his point, Ross began the conversation by pointing to Cisco as an example of a company that in 2000 was among the most valuable in the world, but has since fallen out of the top 50.
“The most valuable companies in the world were the ones that used networks to revolutionize business models like Amazon, Google, Facebook and things like that,” Ross said.
“Suddenly it's Nvidia, which is interesting because it's an AI infrastructure provider, but the companies that are going to revolutionize business operations using AI are the ones that are likely to win in the long run.”
Expanding on this comparison, Ross suggested that AI could be as influential in the world of computing as networking was in the 2000s, explaining that its potential is closer to this transformative period in computing than to other forms of automation, such as robotic process automation (RPA).
“Imagine all the different ways you can use networking: you can create a company called Netflix, you can create Twitch, you can do all these different things and AI is much more like that than it is like an individual piece of technology.”
The biggest challenges networking faced before HTTPS were concerns around data security, Ross noted, which are also a major obstacle to AI adoption. Just as the industry overcame the former, he is confident it can navigate concerns around AI and data and, in the meantime, urges leaders to think carefully about how they can make AI work as a “core capability” for their businesses.
Ross told ITPro that in 2024, Appian has seen customers approach developers with more specific demands about what they want to achieve and implement with AI, buoyed by confidence from small AI productivity gains.
But he added that, for developers, the real value lies in the more technical capabilities of generative AI that don't steal the spotlight. As an example, he pointed to Appian's recent annual conference, where an Appian AI Copilot capability for generating filler data to test applications against drew particular interest from customers.
“What got the most applause was generating simulated data because it seems mundane, but it's like the developers know it's a pain in the ass, and now I can use a magic AI button to do it.”
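Appian hasn't published the internals of that Copilot feature, but as a rough illustration of the chore being automated, the sketch below generates filler customer records in Python using the open-source Faker library. The library choice, field names, and file path are assumptions made for the example, not Appian's implementation.

```python
# Illustrative only: generate simulated customer records for testing an app,
# the kind of chore Ross says developers now hand to a "magic AI button".
# Assumes the open-source `faker` package (pip install faker); fields are hypothetical.
import csv
from faker import Faker

fake = Faker()

def generate_test_customers(n: int = 100, path: str = "test_customers.csv") -> None:
    """Write n rows of simulated customer data to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["name", "email", "company", "country", "signup_date"]
        )
        writer.writeheader()
        for _ in range(n):
            writer.writerow({
                "name": fake.name(),
                "email": fake.email(),
                "company": fake.company(),
                "country": fake.country(),
                "signup_date": fake.date_this_decade().isoformat(),
            })

if __name__ == "__main__":
    generate_test_customers(25)
```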
This is a far cry from the big changes Ross envisions for AI. But with time, and with the right people to streamline AI strategy, Ross suggests it could transform business processes from the ground up.
“In reality, many people are still in exploration mode when, in the long term, they should consider AI as an underlying core capability of an operating system,” he told ITPro.
“This allows you to do a lot of different things, like build apps, build simulated data, track all kinds of patterns that you might not have been able to do before, and how users interact with digital experiences.”
“So we're right at the tip of the iceberg right now, where we're unleashing that, and I would say interest will skyrocket as we move towards 2025-26 as well.”
Regulation is generating concern among customers
All companies, especially those in the EU, must follow strict data protection policies, and this could temper AI ambitions. Laws like the GDPR and the EU AI Act are already coming to a head with the rollout of enterprise AI, and AI developers such as Meta have already shelved plans to train AI systems on the data of EU citizens.
But Ross is confident that by the end of 2024, customers will have overcome what he calls the “trust hurdle” – the gap between compliance with audits and certifications and deploying AI in sensitive environments.
“Everyone intuitively knows that AI models are a probabilistic representation of data, so when you send your data and an AI model is trained on it, there is a representation of that data in that model,” Ross told ITPro.
“If I have a German customer who is covered by GDPR laws and I use that customer's information to train an AI model to better target products and goods to them, and then that German customer comes to me and says ‘I want to exercise my right to be forgotten’, how can I get them out of the AI model?”
Ross points out that models cannot “unlearn” information, in much the same way that humans cannot forget sensitive information on request. If you have learned something you shouldn't have, he adds, the best you can do is promise not to tell anyone.
“So it's an inherent problem with AI models, in terms of their representational data. And I think more and more people are realizing that you need to understand that, implement these private AI architectures to manage them as part of your data set, just like you do anything else, and be very careful about what you send to these AI models for retraining.”
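Ross doesn't prescribe a specific mechanism here, but one precaution his point implies is stripping obvious identifiers from records before anything is sent off for retraining. The following is a minimal sketch assuming simple regex-based redaction; the patterns and placeholder tokens are illustrative, and a real private-AI architecture would rely on dedicated PII-detection tooling.

```python
# Illustrative precaution only: crude redaction of obvious identifiers before
# data leaves your environment for model retraining. These hypothetical regex
# patterns are not a substitute for proper PII detection.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace email addresses and phone numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

record = "Contact Anna Schmidt at anna.schmidt@example.de or +49 30 1234567."
print(redact(record))
# -> "Contact Anna Schmidt at [EMAIL] or [PHONE]."
```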
Context as key to AI effectiveness
While policy and strategy may define AI success more generally, this is one area where Ross said technology can definitely step in to solve the problem. Specifically, he pointed to the larger context windows now available, which control the amount of information a user can provide to an AI model at once.
“The biggest innovation for generative AI in 2024 has been the expansion of context windows,” Ross said.
“Because context windows in generative AI are essentially short-term memory, I can send the information about that German customer to a generative AI model with a large context window and make sure the AI model doesn't retain that information.
“It's only retained in the context of that transaction and then, in the short term, it simply forgets that information after the transaction is completed. So it's a much better model for using the predictive aspects of AI while ensuring that it doesn't retain that information for retraining purposes.”
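To make the “short-term memory” idea concrete, in practice it means supplying the customer's details in the prompt of a single request rather than fine-tuning a model on them. Below is a minimal sketch assuming the OpenAI Python SDK; the model name and record fields are illustrative, and whether a provider logs or trains on request data is a separate contractual and policy question.

```python
# Sketch of the "context window as short-term memory" pattern: customer data is
# supplied per request instead of being baked into model weights via fine-tuning.
# Assumes the OpenAI Python SDK; model name and record fields are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

customer_record = "Name: Anna Schmidt; Segment: SMB; Last purchase: office furniture"

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Suggest relevant products. Use only the data provided."},
        {"role": "user", "content": f"Customer profile: {customer_record}"},
    ],
)
print(response.choices[0].message.content)
# The profile lives only in this request's context window; it is not added to
# the model's weights unless you explicitly submit it for fine-tuning.
```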
Context windows have grown progressively over the past two years as AI vendors seek to make LLMs more useful for companies that may have privacy concerns or are looking to input large amounts of data, such as code bases.
When the GPT-3-powered ChatGPT first launched, it only allowed 4,000 tokens (equivalent to about 3,000 words) of information in each input. This has since been far surpassed, with GPT-4o offering a context window of 128,000 tokens, Anthropic's Claude 3.5 Sonnet a 200,000-token context window, and Google's Gemini 1.5 Pro a context window of one to two million tokens for select customers.
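For a sense of what those limits mean in practice, token counts can be estimated locally before anything is sent to a model. A short sketch using the tiktoken tokenizer library is below; the window sizes simply echo the figures cited above, and the sample text is made up.

```python
# Rough illustration of token budgeting against a model's context window.
# Assumes the tiktoken library (pip install tiktoken); the limits below echo
# the figures cited in the article, not an authoritative list.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models

def fits_in_context(text: str, window: int) -> bool:
    """Return True if the text's token count fits within the given context window."""
    return len(enc.encode(text)) <= window

document = "Example customer history entry ... " * 2000
print("Tokens:", len(enc.encode(document)))
print("Fits a 4,000-token window:", fits_in_context(document, 4_000))
print("Fits a 128,000-token window:", fits_in_context(document, 128_000))
```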
Ross acknowledges that while significant steps have been taken to improve the accuracy of AI outputs, with Appian seeing confidence in eliminating AI hallucinations jump from 60% to 90% in the last year alone, a knowledgeable human will always be necessary to meet customers' demands for accuracy.
“[Given] the nature of AI, I'm not sure if it will ever be 100%, because there is always a probabilistic nature around the structure of the question, and a question could easily be asked that cannot be answered from the data set. Therefore, there is a certain level of probabilistic response that will always need to be sanity checked.”
Looking to the future, Ross suggests that companies could seek much more value from AI beyond the natural language chats seen in the many AI copilots currently on the market.
“When I interact with a computer, the context of what I want is often conveyed in many other ways, such as where my screen is, where the mouse is moving, what information I'm looking at, the last five things.
“It's the same way that human communication, like ours right now, is more than just our words; it's our eye contact, our facial expressions; there is more depth in the context of a communication. So a lot of that can be derived and fed into generative AI models to dynamically build things on the fly for customers, or also predict things for our customers.”