Replies: 2 comments
-
You can use a single model by just setting its weight to 1.0. It is currently not possible to use a different api_base per model, but you can use an existing proxy or router like OptiLLM - https://github.com/codelion/optillm to route between models as needed behind the same URL.
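For example, a rough sketch of the single-model case (the key names `llm`, `models`, `name`, `weight`, and `api_base` are assumptions here; check them against the LLMModelConfig fields in config.py and the example configs):

```yaml
# Sketch only: key names are assumed, verify against openevolve/config.py
llm:
  api_base: "http://localhost:11434/v1"   # hypothetical local OpenAI-compatible endpoint
  models:
    - name: "my-local-model"               # placeholder model name
      weight: 1.0                           # weight 1.0 => this single model handles all requests
```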
-
A different api_base should work for each model in the config.yml; just specify it for each model. The top-level setting is only used if a model does not specify its own (cf. the LLMModelConfig dataclass https://github.com/codelion/openevolve/blob/5d0922250a9560c4e16dd30c138013e28878cd79/openevolve/config.py#L18 and OpenAILLM, which pulls it: https://github.com/codelion/openevolve/blob/5d0922250a9560c4e16dd30c138013e28878cd79/openevolve/llm/openai.py#L40; the shared settings for all models should not overwrite the individual settings, per the logic in https://github.com/codelion/openevolve/blob/5d0922250a9560c4e16dd30c138013e28878cd79/openevolve/config.py#L99).
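If that holds, mixing one local model and one hosted model could look roughly like this. This is a sketch under the assumption that each entry in `models` accepts its own `api_base` / `api_key` (as the LLMModelConfig dataclass suggests); the model names, URLs, and weights below are placeholders:

```yaml
# Sketch only: field names and values are assumptions, check config.py / the example configs
llm:
  models:
    - name: "local-llama"                      # placeholder: model served locally
      api_base: "http://localhost:8000/v1"     # e.g. a local OpenAI-compatible server
      weight: 0.5
    - name: "gpt-4o-mini"                      # placeholder: hosted model
      api_base: "https://api.openai.com/v1"    # per-model api_base overrides the shared one
      api_key: "YOUR_API_KEY"                  # assumption: api_key can also be set per model
      weight: 0.5
```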
-
My current issue is that my graphics memory is only sufficient to deploy one model locally, but the config requires two. How should I configure it to use one model locally and one online? From what I can see, only one api_base can be set in the config. Or how can I use two models with different api_bases?