One thing I am still unsure about is whether people prefer fully model-agnostic tools or more opinionated assistants that trade flexibility for better behavior in specific scenarios. Curious what others think.
I started with those because they gave predictable latency and behavior while I was validating the UX and reasoning flow.
I’m planning an experimental mode that allows curl-style requests, so the assistant can talk to arbitrary providers or custom endpoints without being tied to a specific model. That should make it easier to plug in other hosted or local setups.
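Roughly, I'm imagining something like the minimal sketch below. To be clear, this is just an illustration of the shape of it, assuming an OpenAI-compatible chat endpoint; the URL, env var, and payload fields here are placeholders, not the actual design:

```python
# Hypothetical sketch of a "curl-style" request to an arbitrary endpoint.
# Endpoint URL, env var name, and payload shape are placeholder assumptions.
import os
import requests

def call_custom_endpoint(prompt: str,
                         endpoint: str = "http://localhost:11434/v1/chat/completions",
                         model: str = "local-model") -> str:
    """Send a chat request to any OpenAI-compatible endpoint (hosted or local)."""
    headers = {
        "Content-Type": "application/json",
        # Many local setups don't need a key, so this is optional.
        "Authorization": f"Bearer {os.environ.get('CUSTOM_API_KEY', '')}",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    resp = requests.post(endpoint, headers=headers, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

The point being that the assistant only cares about the request/response shape, not which provider is behind the URL.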
Longer term, I’m trying to balance that flexibility with keeping the default behavior sane and reliable.