Generative AI: Big Bang
Updated: Sep 10
Over the past few months we have been witnessing an explosion of generative AI technologies across media formats. True, these technologies are still in their infancy, both in the underlying technology and in product maturity.
For instance, ChatGPT, the most prominent attempt yet, is trained on data from 2021 or earlier, so it will not provide recent information or connect to live websites. Its monastic interface allows only text queries and responses, and a strong sense of MVP - an early trial at conveying value - abounds, although, admittedly, this seems like a wildly successful one:
In the past week, apparently due to the explosive rise in demand, all you get is an "Out of Office" kind of response, as the platform has outgrown its current capacity.
A "Pro" version is rumored to be rolled out soon, quoted at $42 per month. My guess is that this is aimed at flattening the demand curve rather than being the monetization model OpenAI is after.
Is OpenAI going after billions of users with this crude MVP? So what is their play, and how can it give rise to a new universe of AI applications?
Go To Market
With the incredible strength of ChatGPT, even in its unrefined current state, the internet throngs threw themselves at the new toy in the attic.
It seems as if there is not a single corner of human intellectual activity that the technology cannot capture and, one way or the other, bring tremendous productivity value to users.
But is it realistic to expect a single company to cater to the myriad of use cases that have emerged? I argue that this Jack will not master all trades.
The most scalable and profitable approach would be to harness the power of the masses: a new industry of GPT-related solutions, tailored to the myriad potential use cases.
In my analysis, while OpenAI keeps focusing on the deep technology and refining its models, it will expose a paid-for API to solution partners, preferably through a self-service interface.
The secret sauce of these third-party vendors, commercializing GPT chat for B2B and B2C customers, lies in the "I/O modifiers". These are:
Query modifiers: Replacing human prompt engineers, these will modulate the natural-language queries keyed in by users, constraining and formatting them into a context-specific form with bounded values and conditions, so that the generated response is coherent, context-relevant, and precise.
Output pre-processors: These will format the generated result for the relevant front end.
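The two modifiers above can be pictured as thin wrappers around the model call. A minimal Python sketch follows, assuming a hypothetical `llm_complete` stand-in for the paid-for API; the function names, the prompt template, and the JSON payload shape are all illustrative, not an actual vendor design:

```python
# Sketch of an "I/O modifier" layer wrapped around a generic LLM completion call.

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for the paid-for LLM completion API."""
    return f"[model answer to: {prompt}]"

def query_modifier(user_query: str, domain: str, max_words: int = 120) -> str:
    """Constrain and format the raw user query into a domain-specific prompt."""
    return (
        f"You are an assistant for the {domain} domain. "
        f"Answer in at most {max_words} words, using only {domain} terminology.\n"
        f"Question: {user_query.strip()}"
    )

def output_preprocessor(raw_response: str) -> dict:
    """Format the generated result for a front end (here: a simple JSON payload)."""
    return {"answer": raw_response.strip(), "source": "llm"}

def handle(user_query: str, domain: str) -> dict:
    """Full pipeline: modify the query, call the model, format the output."""
    prompt = query_modifier(user_query, domain)
    return output_preprocessor(llm_complete(prompt))

result = handle("How do I dispute a charge?", "consumer banking")
```

The vendor's domain-specific IP lives entirely in `query_modifier` and `output_preprocessor`; the model call itself stays a commodity behind the API.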
Note that the system builds from the bottom up: OpenAI is responsible for the LLM and the API, while the third-party solution providers own the IP required to shape the conversation results within domain-specific boundaries.
These vendors will expose the product on the desired front end: a Twitter bot, a Facebook bot, a website, a browser extension, or a host of other potential embodiments.
Finally, these vendors will support the data flow with a business layer that enables them to manage users, payments, and ancillary functions.
Are you thinking of AI chat products yourself? Come talk to us!