Gemini Configuration

Configurable parameters:

| Parameter | Description | Default |
| --- | --- | --- |
| api_base | API base URL | https://generativelanguage.googleapis.com |
| api_version | API version | v1beta |
| key | Gemini API key | AIzaSyDd----------gDksADvDHk |
| model | Gemini models, e.g. models/gemini-pro | models/gemini-pro |
| max_tokens | Maximum output token limit. A token is equivalent to about 4 characters; 100 tokens are roughly 60-80 English words. | 400 |
| temperature | Sampling temperature, between 0 and 2. Higher values (e.g. 0.8) make the output more random, while lower values (e.g. 0.2) make it more focused and deterministic. We generally recommend altering this or top_p, but not both. For more information, refer to: Temperature | 0.7 |
| top_p | An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. So 0.1 means only the tokens in the top 10% probability mass are considered. For more information, refer to: Top_P | 1 |
| top_k | For more information, refer to: Top_K | 1 |
| safety_settings | For more information, refer to: Safety Settings | [] |
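For reference, an `llm_config` block that sets every parameter above might look like the following sketch (the values are the table's defaults, and the key is a placeholder):

```json
{
  "llm_config": {
    "api_base": "https://generativelanguage.googleapis.com",
    "api_version": "v1beta",
    "key": "your gemini key",
    "model": "models/gemini-pro",
    "max_tokens": 400,
    "temperature": 0.7,
    "top_p": 1,
    "top_k": 1,
    "safety_settings": []
  }
}
```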

Configuration example:

roles.json

```json
{
  "2": {
    "start_text": "Hahaha, I'm the super-fast and adorable 小兔. Is there anything I can help you with?",
    "prompt": "You are a knowledgeable and helpful intelligent robot. Your name is \"小兔\", and your task is to chat with me.",
    "send_initial_messages": false,
    "llm_type": "gemini",
    "llm_config": {
      "key": "your gemini key",
      "model": "models/gemini-pro"
    }
  }
}
```
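As a rough sketch of how these `llm_config` values could map onto a Gemini `generateContent` request, the helper below merges the table's defaults with a role's config and builds the v1beta REST URL and body. The `build_request` function and its `DEFAULTS` dict are illustrative assumptions for this sketch, not part of the project itself:

```python
# Illustrative only: the function name and DEFAULTS are assumptions,
# not part of this project's API. The request shape follows Google's
# v1beta generateContent REST endpoint.

DEFAULTS = {
    "api_base": "https://generativelanguage.googleapis.com",
    "api_version": "v1beta",
    "model": "models/gemini-pro",
    "max_tokens": 400,
    "temperature": 0.7,
    "top_p": 1,
    "top_k": 1,
    "safety_settings": [],
}

def build_request(llm_config: dict, prompt: str):
    # Role-level llm_config overrides the defaults from the table above.
    cfg = {**DEFAULTS, **llm_config}
    url = (
        f"{cfg['api_base']}/{cfg['api_version']}/"
        f"{cfg['model']}:generateContent?key={cfg['key']}"
    )
    body = {
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {
            "maxOutputTokens": cfg["max_tokens"],
            "temperature": cfg["temperature"],
            "topP": cfg["top_p"],
            "topK": cfg["top_k"],
        },
        "safetySettings": cfg["safety_settings"],
    }
    return url, body

url, body = build_request(
    {"key": "your gemini key", "model": "models/gemini-pro"}, "Hello"
)
```

Only `key` and `model` are set in the example role, so the remaining generation settings fall back to the defaults.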