Mismatch between results of virtual agent in test view and when deployed via campaign
Hello,
My company has signed up to Zoom Contact and Zoom Phone Enterprise. I have a good working knowledge of AI voice agents, having worked on POC agents in Ragflow, N8N and Elevenlabs. It's early days, so I am happy to park this to one side, as we are working with a professional partner.
What I am struggling with is the change in behaviour between three scenarios: querying the knowledge base directly, querying a virtual agent with a minimal prompt, and querying a deployed virtual agent via a full-screen URL campaign deployment. Given that I have spent much of the last year preparing, cleaning and building RAG agent workflows for text and speech, I am trying to figure out why the same knowledge base would retrieve different matched content depending on how it is queried, even with a vanilla prompt.
Are there video tutorials or documentation that walk through end-to-end example projects, giving blueprints for start-to-finish workflows, as is often the case with N8N YouTubers?
