# Open Interpreter with Llama 2
[GitHub - KillianLucas/open-interpreter: A natural language interface for computers](https://github.com/KillianLucas/open-interpreter)
```sh
interpreter --local
```
## Using LM Studio
[LM Studio - Discover, download, and run local LLMs](https://lmstudio.ai/)
```sh
# AppImages need libfuse2, which Ubuntu 22.04+ no longer installs by default
sudo add-apt-repository universe
sudo apt install libfuse2
wget https://releases.lmstudio.ai/linux/0.2.14/beta/LM_Studio-0.2.14-beta-1.AppImage
chmod u+x LM_Studio-0.2.14-beta-1.AppImage
./LM_Studio-0.2.14-beta-1.AppImage
#=> Missing X server or $DISPLAY
```
It needs a GUI, so it won't start on a headless machine (no X server / `$DISPLAY`).
## Without LM Studio
[Using Open Interpreter with a self-hosted OpenAI-compatible server: llama-cpp-python and FastChat | Meg Channel](https://note.com/ai_meg/n/nc0206327dd8f)
```sh
# Build llama-cpp-python against cuBLAS so layers can be offloaded to the GPU
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install 'llama-cpp-python[server]'
```
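To confirm the CUDA build actually offloads to the GPU, one quick check is to load the model directly with `llama-cpp-python` and watch the startup log for cuBLAS/CUDA lines (a minimal sketch; the model path matches the server command below):
```python
from llama_cpp import Llama

# Loading the model prints backend info; cuBLAS/CUDA lines confirm GPU offload.
llm = Llama(
    model_path="../models/codellama-13b.Q5_K_M.gguf",
    n_gpu_layers=-1,  # -1 offloads every layer to the GPU
)
out = llm("Q: What is 2 + 2? A:", max_tokens=8)
print(out["choices"][0]["text"])
```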
```sh
# Serve the GGUF over an OpenAI-compatible API; an n_gpu_layers value larger
# than the model's layer count offloads all layers to the GPU
python -m llama_cpp.server --host 192.168.0.102 --model ../models/codellama-13b.Q5_K_M.gguf --n_gpu_layers 8000
```
```sh
curl http://192.168.0.102:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      { "role": "user", "content": "Hello." }
    ]
  }'
```
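The same request works from Python with the `openai` package (v1 client), pointing `base_url` at the local server. Since `llama_cpp.server` was started with a single model, the `model` field is effectively ignored:
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://192.168.0.102:8000/v1",
    api_key="dummy",  # the local server does not validate the key
)

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # name is moot; the loaded GGUF answers
    messages=[{"role": "user", "content": "Hello."}],
)
print(resp.choices[0].message.content)
```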
Point Open Interpreter at the server via `config.yaml`:
```yaml
llm.api_key: dummy
llm.api_base: http://192.168.0.102:8000/v1
```
```sh
interpreter -cf config.yaml
```
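The same settings can also be applied from Python instead of YAML (a sketch assuming the 0.2.x Python API, where the dotted `llm.*` config keys map to attributes on the `interpreter` singleton):
```python
from interpreter import interpreter

# Mirror config.yaml: point Open Interpreter at the local server
interpreter.llm.api_base = "http://192.168.0.102:8000/v1"
interpreter.llm.api_key = "dummy"

interpreter.chat("Hello.")
```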