Import Models Manually
This is currently under development.
In this section, we will show you how to import a GGUF model from HuggingFace, using our latest model, Trinity, as an example.
We are shipping a UI to make this easier soon, but for now the process is a bit manual. Apologies.
Import Models Using Absolute Filepath (version 0.4.7)
Starting from version 0.4.7, Jan supports importing models using an absolute file path, so you can import models from any directory on your computer. Please check the import models using absolute filepath guide for more information.
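As a rough sketch of the idea (the authoritative steps are in the dedicated guide), a model.json source can reference a local absolute path rather than a remote download link. The snippet below is illustrative only, and the path shown is hypothetical:

{
  "sources": [
    {
      "filename": "trinity-v1.Q4_K_M.gguf",
      "url": "/Users/<your_user_name>/Downloads/trinity-v1.Q4_K_M.gguf"
    }
  ],
  "id": "trinity-v1-7b"
}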
Manually Importing a Downloaded Model (nightly versions and v0.4.4+)
1. Create a Model Folder
Navigate to the ~/jan/models folder. You can find this folder by going to App Settings > Advanced > Open App Directory.
- macOS
- Windows
- Linux
cd ~/jan/models
C:/Users/<your_user_name>/jan/models
cd ~/jan/models
In the models folder, create a folder with the name of the model.
- macOS
- Windows
- Linux
mkdir trinity-v1-7b
mkdir trinity-v1-7b
mkdir trinity-v1-7b
2. Drag & Drop the Model
Drag and drop your model binary into this folder, ensuring that the modelname.gguf filename matches the folder name, e.g. models/modelname.
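For example, following the naming rule above, the trinity-v1-7b folder created in the previous step would contain a single GGUF file named after it (the filename is illustrative):

ls ~/jan/models/trinity-v1-7b
trinity-v1-7b.gguf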
3. Voila
If your model doesn't show up in the Model Selector in conversations, please restart the app.
If that doesn't work, please feel free to join our Discord community for support, updates, and discussions.
Manually Importing a Downloaded Model (older versions)
1. Create a Model Folder
Navigate to the ~/jan/models folder. You can find this folder by going to App Settings > Advanced > Open App Directory.
- macOS
- Windows
- Linux
cd ~/jan/models
C:/Users/<your_user_name>/jan/models
cd ~/jan/models
In the models folder, create a folder with the name of the model.
- macOS
- Windows
- Linux
mkdir trinity-v1-7b
mkdir trinity-v1-7b
mkdir trinity-v1-7b
2. Create a Model JSON
Jan follows a folder-based, standard model template called a model.json to persist the model configurations on your local filesystem.
This means that you can easily reconfigure your models, export them, and share your preferences transparently.
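For example, once the remaining steps are complete, the Trinity model folder used in this guide would contain the configuration file alongside the downloaded binary (layout shown for illustration):

~/jan/models/trinity-v1-7b/
├── model.json
└── trinity-v1.Q4_K_M.gguf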
- macOS
- Windows
- Linux
cd trinity-v1-7b
touch model.json
cd trinity-v1-7b
echo {} > model.json
cd trinity-v1-7b
touch model.json
Edit model.json and include the following configurations:
- Ensure the id property matches the folder name you created.
- Ensure the GGUF filename matches the id property exactly.
- Ensure the source.url property is the direct binary download link ending in .gguf. In HuggingFace, you can find the direct links in the Files and versions tab (see the quick check after the example below).
- Ensure you are using the correct prompt_template. This is usually provided on the HuggingFace model's description page.
{
"sources": [
{
"filename": "trinity-v1.Q4_K_M.gguf",
"url": "https://huggingface.co/janhq/trinity-v1-GGUF/resolve/main/trinity-v1.Q4_K_M.gguf"
}
],
"id": "trinity-v1-7b",
"object": "model",
"name": "Trinity-v1 7B Q4",
"version": "1.0",
"description": "Trinity is an experimental model merge of GreenNodeLM & LeoScorpius using the Slerp method. Recommended for daily assistance purposes.",
"format": "gguf",
"settings": {
"ctx_len": 4096,
"prompt_template": "{system_message}\n### Instruction:\n{prompt}\n### Response:",
"llama_model_path": "trinity-v1.Q4_K_M.gguf"
},
"parameters": {
"max_tokens": 4096
},
"metadata": {
"author": "Jan",
"tags": ["7B", "Merged"],
"size": 4370000000
},
"engine": "nitro"
}
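If you want to verify that the source url is a direct binary link rather than an HTML page, you can inspect the response headers; this sketch assumes curl is installed and reuses the URL from the example above. A direct link typically reports a binary content type instead of text/html.

curl -sIL "https://huggingface.co/janhq/trinity-v1-GGUF/resolve/main/trinity-v1.Q4_K_M.gguf" | grep -i content-type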
3. Download the Model
Restart Jan and navigate to the Hub. Locate your model and click the Download button to download the model binary.
Your model is now ready to use in Jan.
Assistance and Support
If you have questions or are looking for more preconfigured GGUF models, please feel free to join our Discord community for support, updates, and discussions.