Do we need AI-APIs?

Recently, I’ve had an idea while trying to integrate various AI services like Copilot Studio and Notion’s AI into our business routines:

We live in an era where new services constantly emerge, each vying for customers’ attention by offering better and more promising ways to add value.

However, the decision to implement an AI solution that relies on more than just pre-trained data poses a significant question: Why should I commit to using one AI tool when a better option might appear in just a week? For instance, what if I start implementing RASA and training it, only to discover something more suited to my needs soon after?

This led me to the realization that we might need a sort of business-to-AI API. I envision a service where you can input all your business-relevant data, which then generates synthetic, anonymous training data for AI models. This service would continually evolve and adjust the data to better match your business’s needs, even as it shifts direction or adapts to market changes.
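To make the idea a little more concrete, here is a purely hypothetical Python sketch of what the interface to such a service could look like. None of these names exist anywhere; the “synthetic” step is only a placeholder that strips identifying fields, where a real service would generate genuinely new examples.

```python
# Purely hypothetical sketch of a "business-to-AI" data service interface.
# Every class and method name below is an illustration, not a real product.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class BusinessDataPot:
    """Collects business records and serves anonymised training examples."""
    records: List[Dict[str, str]] = field(default_factory=list)

    def ingest(self, record: Dict[str, str]) -> None:
        """Add a raw business record (CRM entry, support ticket, FAQ, ...)."""
        self.records.append(record)

    def synthetic_training_set(self, n: int) -> List[Dict[str, str]]:
        """Return n training examples derived from the ingested records.
        Placeholder only: this just strips identifying fields and repeats
        records; a real service would generate genuinely synthetic data and
        keep it updated as the business changes direction."""
        anonymised = [
            {k: v for k, v in r.items() if k not in ("customer_name", "email")}
            for r in self.records
        ]
        if not anonymised:
            return []
        return (anonymised * (n // len(anonymised) + 1))[:n]

# Any trainable model (a RASA assistant, a fine-tuned LLM, ...) could then
# pull its training data from the same pot instead of being fed directly.
pot = BusinessDataPot()
pot.ingest({"customer_name": "Acme GmbH",
            "question": "What are your opening hours?",
            "answer": "Mon-Fri, 9:00-17:00"})
print(pot.synthetic_training_set(3))
```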

We need to be able to switch tools more frequently to keep up with the pace of development, or we will be left behind.

I’m very curious to hear your thoughts on this! Please share your insights, guesses, and experiences, good or bad!

[This post has been refined by AI, with its original thoughts and ideas provided by me]


I use a drop-down menu to choose which API I want to use, with the option to add or remove API keys. I try not to hardcode the API keys, for the reasons you mentioned above.

I also have two dialogue boxes so I can compare the responses from different AIs.
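Roughly, the setup looks like the sketch below (simplified, not my actual code; the provider names, environment-variable names, and the ask() stubs are placeholders for the real SDK calls):

```python
# Simplified sketch: pick a provider from a registry, keys come from the
# environment rather than being hardcoded. The ask() bodies are stubs;
# the real vendor SDK or HTTP call would go where the placeholder is.
import os
from typing import Callable, Dict

Provider = Callable[[str], str]  # prompt in, response text out

def make_provider(name: str, env_var: str) -> Provider:
    def ask(prompt: str) -> str:
        api_key = os.environ.get(env_var)
        if not api_key:
            raise RuntimeError(f"Set {env_var} to use the {name} provider")
        # Placeholder: call the vendor's API with api_key and prompt here.
        return f"[{name} response to: {prompt!r}]"
    return ask

# The drop-down in the UI is populated from this registry; adding or
# removing a provider is one line here plus one environment variable.
PROVIDERS: Dict[str, Provider] = {
    "openai": make_provider("openai", "OPENAI_API_KEY"),
    "anthropic": make_provider("anthropic", "ANTHROPIC_API_KEY"),
}

def compare(prompt: str, first: str, second: str) -> None:
    """Side-by-side output, like the two dialogue boxes mentioned above."""
    for name in (first, second):
        print(f"--- {name} ---")
        print(PROVIDERS[name](prompt))
```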


Your insight is indeed a significant advancement!

When I mentioned APIs to AIs, I wasn’t just talking about large language models (LLMs), but it seems my message wasn’t clear on that. Beyond the pre-trained generative AIs that operate on our existing datasets, my interest lies in AIs that are capable of training themselves, where the provision of training data becomes crucial. My theory revolves around the idea of a universally applicable, continually updated dataset, allowing us to connect AIs directly to this “data pot” for their training needs.

Your point in the previous post was highly relevant, especially as a current best practice for cases where the tasks performed by the AI remain relatively consistent, such as when upgrading from GPT-4 to Claude 3.

[This post has been refined by AI, with its original thoughts and ideas provided by me]

Are there any standards / consistency / conventions emerging that would allow pipelining of multiple models and many stages of processing?


I am not sure. Of course there’s AutoGen (1 and 2) and other attempts that work in similar ways. If it comes down to just building a pipeline, that’s not the issue, as you probably know (a rough sketch of what I mean is below). I have not seen any standards or best practices for this yet; then again, I can’t possibly be up to date on everything that’s happening. I would be very interested if anyone knows more about this!
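Just to show why the pipeline part itself is easy: a bare-bones Python sketch of chaining stages behind one common signature. The Stage type and the stage names are my own placeholders, not an emerging standard, and real stages would wrap actual model calls.

```python
# Bare-bones pipeline: each stage is just "text in, text out".
# The Stage signature and stage names are placeholders, not a standard;
# real stages would wrap calls to different models or vendors.
from typing import Callable, List

Stage = Callable[[str], str]

def run_pipeline(stages: List[Stage], text: str) -> str:
    """Feed the output of each stage into the next one."""
    for stage in stages:
        text = stage(text)
    return text

# Illustrative stages; in practice these could be entirely different models.
def extract_key_points(text: str) -> str:
    return f"key_points({text})"

def draft_reply(text: str) -> str:
    return f"draft_reply({text})"

if __name__ == "__main__":
    print(run_pipeline([extract_key_points, draft_reply], "raw customer email"))
```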

Companies like Synthesis AI are focusing on pushing synthetic data for better models, but I couldn’t find any specific information regarding this.

See, I’m interested in smaller businesses. They can’t pay for special treatment; they need standards. An affordable service that provides the data, running on rented compute or on their own hardware. Maybe it’s too early for this to be affordable, and thus there’s no real need for standards yet?

[No AI touched this, because it keeps messing up the contents hehe]