One of the important pieces here is making sure you correct LLMs on their instructions and their initial feedback. They get it wrong, often, but it's like teaching a toddler: they get it right after a few lessons. Same here — once you TRAIN the LLM on your queries and intent, it becomes much more adept at providing valuable feedback and guidance for what you're trying to achieve.
That training is one of the most important parts. The query you use, the iteration, and WHAT you're providing it with in each of those iterations are what get you to a usable output.
One of the challenges I'm having with LLMs is memory. They seem to forget things from project to project, and that leads to quite a few errors. ChatGPT in particular has sketchy memory at times, and you can max out your stored "chats," which means removing context from memory. That could end up being an Achilles' heel for these models as support tools — and curating what to remove is painstakingly time-consuming.
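One workaround for the forgetting problem is to stop relying on the model's built-in memory at all and re-inject a project summary at the start of every new chat. A minimal sketch, assuming a chat-style API that takes a list of role/content messages; the project details and helper function here are hypothetical, not from the original post:

```python
# Hypothetical sketch: instead of trusting per-chat memory, keep a
# project "context preamble" in your own notes and prepend it to every
# fresh conversation. The project details below are illustrative.

PROJECT_CONTEXT = (
    "Project: inventory dashboard rewrite. "
    "Stack: Python backend, React frontend. "
    "Conventions: snake_case, pytest for tests. "
    "Prior decision: REST endpoints, no GraphQL."
)

def new_conversation(user_message: str) -> list[dict]:
    """Build the message list for a fresh chat, re-sending the stored
    project context so the model doesn't start from zero."""
    return [
        {"role": "system", "content": PROJECT_CONTEXT},
        {"role": "user", "content": user_message},
    ]

messages = new_conversation("Review this endpoint for error handling.")
```

This trades a few extra tokens per chat for not having to curate the model's stored memory by hand: the context lives in a file you control, not in the vendor's opaque memory feature.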