Rapid user adoption of ChatGPT creates myriad challenges

Sept. 1, 2023
How event-driven architecture (EDA) can fix the flaws with ChatGPT and unlock business value

Artificial intelligence (AI) holds the potential to revolutionize the way we live, learn, and get things done; by 2030, research suggests that AI will contribute $15.7 trillion to the global economy. In particular, ChatGPT, the popular generative AI chatbot from OpenAI built on a large language model, has taken a top spot within the technical community, offering a host of innovative applications as its capabilities continue to rapidly evolve. So, what’s next for this advanced chatbot?

From instant translations and idea generation to composing emails and essays from scratch, ChatGPT is beginning to filter into our everyday lives. According to a UBS study, the chatbot reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history.

The Untold Story: Limitations Holding ChatGPT Back

There are, however, some drawbacks and limitations that are keeping ChatGPT, and AI in general, from achieving its full potential. This is where EDA comes in: it facilitates the flow of information between the systems that “publish” events and the other systems that register interest in that kind of information by “subscribing” to topics. Building applications with EDA is a perfect way to tie internal features together and make them more responsive. In practice, EDA absorbs requests and services them when ChatGPT is invoked, helping to improve response times, cutting down on unnecessary energy consumption, and even opening new e-commerce opportunities for B2B and B2C businesses. Here’s how.

5 Ways EDA Unlocks the Potential of ChatGPT

1) No Questions Asked! Enable Automatic Answers by Streamlining the Request and Response Cycle

Today ChatGPT operates in what we techies call a “request/reply” model. Ask and ye shall receive, you might say. Now imagine if ChatGPT could proactively send you something it knows you’d be interested in!

For example, say you use ChatGPT to summarize and note action items from a Zoom meeting with a dozen participants. Instead of each participant raising a query, EDA would allow ChatGPT to send the notes to all attendees at the same time, including those who missed the meeting. Everyone would be automatically and instantly up to date on meeting outcomes, and ChatGPT would bear significantly less load since it proactively sends one message to a dozen recipients instead of satisfying a series of request/reply interactions over time, thereby improving service levels for users.

Any group activity that needs the same ChatGPT-facilitated suggestions can benefit from this capability. For instance, consider a team working jointly on a codebase. Rather than ChatGPT suggesting changes and improvements separately to every developer in their IDE, each IDE would “subscribe” to suggestions, and the underlying EDA technology would push them out to all subscribed developers when they open the codebase.
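To make the publish/subscribe idea concrete, here is a minimal in-memory sketch in Python. It uses only the standard library and a hypothetical `EventBroker` class, not Solace’s actual API: a dozen attendees subscribe to a meeting-notes topic, and a single publish fans the summary out to all of them at once.

```python
from collections import defaultdict

class EventBroker:
    """Minimal in-memory pub/sub broker: one publish fans out to all subscribers."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # The summary is generated and published once, then delivered to
        # every subscriber -- instead of once per request/reply exchange.
        for handler in self._subscribers[topic]:
            handler(event)

broker = EventBroker()
received = []

# A dozen attendees subscribe to the meeting-notes topic.
for attendee in range(12):
    broker.subscribe("meeting/notes", lambda e, a=attendee: received.append((a, e)))

broker.publish("meeting/notes", "Action items: ...")
assert len(received) == 12  # one publish, twelve deliveries
```

The same pattern covers the IDE scenario: each developer’s IDE registers a handler on a suggestions topic, and one published suggestion reaches every subscriber.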

2) Reduce ChatGPT’s Energy Consumption with More Intelligent Resource Utilization

ChatGPT is very resource-intensive, and therefore expensive, from a processing/CPU perspective, and requires special chips called graphics processing units (GPUs). And it uses quite a lot of them. The extensive GPU workload (now estimated at upwards of 28,936 GPUs) required to train the ChatGPT model and process user queries incurs significant costs, estimated to be between $0.11 and $0.36 per query.

And let’s not overlook the environmental costs of the model. The high power consumption of GPUs contributes to energy waste, with reports from data scientists estimating ChatGPT’s daily carbon footprint at 23.04 kgCO2e, comparable to that of other large language models such as BLOOM.

However, the report explains “the estimate of ChatGPT’s daily carbon footprint could be too high if OpenAI’s engineers have found some smart ways to handle all the requests more efficiently.” So, there is clearly room for improvement in that carbon output.

By implementing EDA, ChatGPT can make better use of its resources by only processing requests when they are received, instead of running continuously.
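One way to picture event-triggered processing is a worker that blocks on a queue and consumes cycles only when a request event actually arrives. This is a stdlib-only sketch of the idea, not a claim about how OpenAI schedules GPU work:

```python
import queue
import threading

requests = queue.Queue()
processed = []

def worker():
    # The worker blocks on the queue and does work only when a request
    # event arrives, rather than polling or running continuously.
    while True:
        req = requests.get()
        if req is None:  # shutdown sentinel
            break
        processed.append(f"answer:{req}")
        requests.task_done()

t = threading.Thread(target=worker)
t.start()

for i in range(3):
    requests.put(f"query-{i}")  # each put is an "event" that wakes the worker
requests.put(None)
t.join()

assert processed == ["answer:query-0", "answer:query-1", "answer:query-2"]
```

Because `Queue.get()` blocks, an idle worker costs almost nothing; resources are spent only in proportion to the events received.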

3) Eliminate ChatGPT Unavailability When at Capacity

ChatGPT needs to handle a high volume of incoming requests from users. The popularity, rapid growth, and unpredictability of ChatGPT mean it is frequently overwhelmed as it struggles to keep up with demand that is extremely volatile, or what we call “bursty.” Today this leads to “sorry, can’t help you” error messages for both premium and free ChatGPT users. These recent ChatGPT outages indicate how saturated the system is becoming as it struggles to rapidly scale up to meet ever-increasing traffic and compete with new rivals such as Google Bard. So where does EDA come in?

In the event of a ChatGPT overload, an EDA can buffer requests and service them asynchronously across multiple event-driven microservices as the ChatGPT service becomes available. And with decoupled services, if one service fails, it does not cause the others to fail.

The event broker, a key component of event-driven architecture, is a stateful intermediary that acts as a buffer, storing events and delivering them when the service comes back online. Because of this, service instances can be added quickly to scale capacity without downtime for the whole system, improving both availability and scalability.

With EDA assistance, users of ChatGPT services across the globe can ask for what they need at any time, and ChatGPT can send them the results as soon as they are ready. This will ensure that users don’t have to re-enter their query to get a generative response, improving overall scalability and reducing response time.
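The buffering behavior described above can be sketched in a few lines of Python. This is a simplified illustration, with a hypothetical `BufferedService` class standing in for a real event broker: requests submitted while the backend is down are held rather than rejected, then drained once it recovers.

```python
import queue

class BufferedService:
    """Sketch of a broker buffering requests while the backend is unavailable."""
    def __init__(self):
        self.buffer = queue.Queue()  # stateful intermediary holding events
        self.available = False       # backend health flag
        self.responses = []

    def submit(self, request):
        # Requests are never rejected with an error; they are buffered.
        self.buffer.put(request)
        self.drain()

    def drain(self):
        # Deliver buffered requests only while the backend is healthy.
        while self.available and not self.buffer.empty():
            self.responses.append(f"result:{self.buffer.get()}")

svc = BufferedService()
svc.submit("q1")               # backend is down: request is held, not dropped
assert svc.responses == []     # nothing processed yet, but nothing lost
svc.available = True           # backend recovers
svc.drain()
assert svc.responses == ["result:q1"]
```

The user’s query survives the outage and is answered when capacity returns, which is exactly the “no need to re-enter your query” behavior described above.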

4) Integrate ChatGPT into Business Operations to Disrupt the AI E-commerce Marketplace

AI plays a critical role in the e-commerce marketplace – in fact, it is projected that the e-commerce AI market will reach $45.72 billion by 2032. So, it’s no surprise that leading e-commerce players are trying to figure out how to integrate ChatGPT into their business operations. Shopify, for instance, has developed a shopping assistant with ChatGPT that is capable of recommending products to users by analyzing their search engine queries.

EDA has the potential to enhance the shopping experience even further and help B2C and B2B businesses learn more about their customers. By tracking key events at high volume from e-commerce platforms, businesses can understand patterns in customer behavior, such as which items are the most profitable in certain regions and what factors influence purchasing decisions. This information can then be sent to a data store for the ChatGPT machine learning model to predict customer behavior and make personalized product recommendations. This is only the beginning of the development of these sorts of models based on ChatGPT.
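As a toy illustration of that pipeline, the snippet below aggregates a hypothetical stream of purchase events into per-region counts, the kind of summary that could land in a data store for a recommendation model to consume. The event shapes and field names are invented for the example.

```python
from collections import Counter

# Hypothetical purchase events captured from an e-commerce platform.
events = [
    {"type": "purchase", "item": "shoes", "region": "us-east"},
    {"type": "purchase", "item": "shoes", "region": "us-east"},
    {"type": "purchase", "item": "hat", "region": "eu-west"},
    {"type": "page_view", "item": "hat", "region": "eu-west"},
]

# Aggregate purchase counts by (region, item) before landing them in a
# data store that a recommendation model could later train on.
by_region = Counter(
    (e["region"], e["item"]) for e in events if e["type"] == "purchase"
)

assert by_region[("us-east", "shoes")] == 2
assert by_region[("eu-west", "hat")] == 1
```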

5) Improve Responsiveness for your Global Userbase

Since ChatGPT and ChatGPT apps have a global user base, you want to distribute data from your GPT queries efficiently. An event mesh is the perfect architecture to satisfy this demand.

An event mesh is an architecture layer composed of a network of event brokers that allows events from one application to be routed to and received by any other application, regardless of where they are deployed. Through this, you could dynamically route data on demand to interested subscribers rather than sending your ChatGPT results to all applications and relying on application logic to filter them out. This results in a better user experience and saves compute and network resources.
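Topic-based routing is what makes this selective delivery work. Below is a stdlib-only sketch using wildcard topic patterns (via `fnmatch`); the topic scheme and application names are invented for illustration and are not Solace’s actual topic syntax:

```python
from fnmatch import fnmatch

# Each application subscribes to a topic pattern; '*' is a wildcard.
subscriptions = {
    "emea-app": "results/emea/*",
    "apac-app": "results/apac/*",
}

def route(topic):
    # Deliver only to applications whose subscription matches the topic,
    # instead of broadcasting to every app and filtering client-side.
    return [app for app, pattern in subscriptions.items() if fnmatch(topic, pattern)]

assert route("results/emea/summary") == ["emea-app"]
assert route("results/apac/summary") == ["apac-app"]
```

In a real event mesh, the brokers perform this matching and forward events only toward regions with matching subscribers, so results never traverse links where no one wants them.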

Unleash the Full Potential of ChatGPT with EDA

ChatGPT may still be in its infancy but with its rapid user adoption and regular new feature announcements, it seems that the story is far from over. Whether it is used to address service outages and excessive energy consumption; enable greater scalability, resilience and flexibility; or bring new business use cases to B2B and B2C organizations, EDA has the capacity to help this new generative AI tool build on its newfound success.

About the author: Thomas Kunnumpurath is the Vice President of Systems Engineering for Americas at Solace, where he leads a field team across the Americas delivering solutions like the Solace PubSub+ Platform across a wide variety of industry verticals such as finance, retail, IoT, and manufacturing.

Prior to joining Solace, Thomas spent over a decade of his career leading engineering teams responsible for building out large-scale globally distributed systems for real-time trading systems and credit card systems at various banks.