2024-05-13 - LLM response validation and feedback
Problem
Historically we ran into parsing and SQL errors when trying to use the response from any of the LLMs we asked for suggested trust rules to validate data.
Solution
By strictly enforcing a JSON response with no formatting or markup, we get a cleaner initial response that is easier to parse and confirm as valid.
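A minimal sketch of what that enforcement could look like. The prompt text, field names, and helper are assumptions for illustration, not the actual implementation:

```python
import json

# Hypothetical strict system prompt (assumed wording, not the real prompt).
STRICT_PROMPT = (
    "Respond with a single JSON object only. "
    "No markdown, no code fences, no commentary. "
    'Schema: {"rule_name": string, "sql": string, "explanation": string}'
)

def parse_rule_response(raw: str) -> dict:
    """Parse and validate the LLM's JSON reply, rejecting any markup."""
    text = raw.strip()
    if text.startswith("```"):
        # The model ignored the prompt and wrapped the JSON in a code fence.
        raise ValueError("response contains markdown formatting")
    payload = json.loads(text)  # raises json.JSONDecodeError if not valid JSON
    for key in ("rule_name", "sql", "explanation"):
        if key not in payload:
            raise ValueError(f"missing required field: {key}")
    return payload
```

Rejecting fenced or non-JSON output up front means downstream code only ever sees a dict with the expected keys.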
We also now use a three-pass pattern: if the first response does not produce a valid SQL statement, we pass the error (typically a data-type or casting error) back to the LLM to try again, and again if necessary.
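The retry loop can be sketched as follows. `ask_llm` and `validate_sql` are hypothetical callables standing in for the real LLM client and SQL checker (e.g. an `EXPLAIN` dry run):

```python
MAX_ATTEMPTS = 3  # the three-pass pattern

def generate_valid_sql(ask_llm, validate_sql, question: str) -> str:
    """Ask the LLM for SQL, feeding validation errors back up to three times.

    `ask_llm(prompt)` returns a SQL string; `validate_sql(sql)` returns
    None on success or an error message string on failure. Both are
    assumed interfaces, not the actual ones.
    """
    prompt = question
    last_error = None
    for _ in range(MAX_ATTEMPTS):
        sql = ask_llm(prompt)
        error = validate_sql(sql)  # e.g. catch type and casting errors
        if error is None:
            return sql
        last_error = error
        # Feed the database error back so the model can correct itself.
        prompt = (
            f"{question}\n\nYour previous SQL failed with this error:\n"
            f"{last_error}\nReturn corrected SQL."
        )
    raise RuntimeError(f"no valid SQL after {MAX_ATTEMPTS} attempts: {last_error}")
```

Capping the loop at three passes bounds latency while still giving the model a chance to self-correct from the concrete error message.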
The stricter prompt also keeps code and markup out of the contextual response, since we have no way to safely render arbitrary markup as HTML.
Leverage the Magic
We leverage a stricter initial prompt and error-check the response in our /api layer before sending the safe payload back to the frontend for display.
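As a defense-in-depth sketch, the API layer could also escape the contextual text before it reaches the frontend. The field name and helper are assumptions:

```python
import html
import json

def safe_payload(payload: dict) -> str:
    """Escape contextual text so the frontend never renders raw markup.

    Hypothetical /api-layer helper; the "explanation" field name is assumed.
    """
    cleaned = dict(payload)
    cleaned["explanation"] = html.escape(payload.get("explanation", ""))
    return json.dumps(cleaned)
```

Even if a stray tag slips past the stricter prompt, it arrives at the browser as inert escaped text rather than renderable HTML.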
Last Refreshed
Doc Refreshed: 2024-05-20