How to Use the Dolphin Llama 3 Ollama Model

Ollama Program.

First, download the Ollama program files from ollama.com.

https://ollama.com/

Next, we need to pull the dolphin-llama3 model.

This process starts by opening two terminals, one for the server and the other for the model.
In the first terminal, type ollama serve
This starts the base Ollama server program.

In the second terminal, type ollama run dolphin-llama3
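Once the pull finishes, you can confirm the model is installed by listing the local models in the second terminal. This is a quick sketch and assumes ollama is on your PATH and the server from the first terminal is still running:

```shell
# List locally installed models; dolphin-llama3 should appear in the output.
ollama list
```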

Close both terminals and run the commands again to ensure the uncensored model is running.

You can type in a question that would typically be censored to see if it is working.

The program is now running from the primary hard drive; next we will discuss how to transfer it to an external drive.

I will be transferring the program and model to a SanDisk 128 GB USB 3.0 flash drive.

First, right-click the flash drive, select Format, and under File system, select NTFS. This makes it possible to transfer files larger than 4 GB. If you are using a large external hard drive, you can skip this step, as it will be NTFS by default. Note that formatting will delete anything you previously had saved on the flash drive, so do this when first using a new flash drive.
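The same format can also be applied from PowerShell. This is a sketch that assumes the flash drive is mounted as H: (adjust the drive letter to match yours), and like the right-click method, it erases everything on the drive:

```shell
# Format the flash drive as NTFS so files larger than 4 GB can be copied.
# WARNING: this erases the drive. H is an assumed drive letter; change it to yours.
Format-Volume -DriveLetter H -FileSystem NTFS -Confirm:$false
```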

Transfer the Ollama files from C:\Ollama to the external hard drive or flash drive.

We also need to copy over the base Ollama server program that interfaces with the model.
To find where it is located, type:
Get-Command ollama

In this case it was located here:
C:\Users\cody\AppData\Local\Programs\Ollama
We need to copy the files within this Ollama folder to the ollama folder on the external hard drive.
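The two copy steps above can be sketched in PowerShell as follows. This assumes the model files live in C:\Ollama, the program folder is the one Get-Command reported, and the external drive is H:; adjust all three paths to match your system:

```shell
# Copy the model files to the external drive (H: is an assumed drive letter).
Copy-Item -Path "C:\Ollama\*" -Destination "H:\ollama" -Recurse -Force

# Copy the base Ollama server program into the same folder.
# This source path is the one Get-Command ollama reported on this machine.
Copy-Item -Path "C:\Users\cody\AppData\Local\Programs\Ollama\*" -Destination "H:\ollama" -Recurse -Force
```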

Now we can run the model from the external drive, which could be the D:, E:, G:, H:, or I: drive.

Run Dolphin Llama 3 From an External Drive

Now we will run the program from the external drive.
Step 1:
To run from the external drive, first open two PowerShell terminals.

In the first terminal we will run the server. Start by changing the directory to the external drive:

cd h:\

To set the model path, change the environment variable with this command:
$env:OLLAMA_MODELS = "H:\ollama\models"

Then start the server with the serve command.
ollama\ollama.exe serve
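Before moving on, you can confirm the server came up by querying it from another terminal. This is a sketch assuming the default port 11434; the root endpoint of a running Ollama server replies with "Ollama is running":

```shell
# Query the local Ollama server; prints "Ollama is running" if it is up.
(Invoke-WebRequest -Uri "http://127.0.0.1:11434").Content
```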

Step 2:
In the second terminal, we will run the program.

Again, change the directory to the external drive:
cd h:\

Then type in the run command.
ollama\ollama.exe run dolphin-llama3

The first time this runs it might take a minute to initialize.

We can now type a question that would typically be censored to see if the model is working.

It gave an appropriate response, so the model is not censored.

AnythingLLM Interface.

Step 1.

The first step is to run the server within PowerShell the same way as before.

We will set the path and then enter the serve command.
Change directory
cd h:\
$env:OLLAMA_MODELS = "H:\ollama\models"

Then enter the serve command.
ollama\ollama.exe serve

Now go to anythingLLM.com and download the program.

For the installation path, use an anythingllm folder on the external drive.

It will take a few minutes for the program to install.

Before opening the program, a .env file needs to be added within the anythingllm folder. Open a text editor and save the following lines to that folder as a .env file. Make sure the model path is correct; in this case, it is set to the H: drive.

OLLAMA_HOST=http://127.0.0.1:11434
LLM_PROVIDER=ollama
MODEL_BACKEND=dolphin-llama3
OLLAMA_MODEL_PATH=H:\Ollama\models
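The .env file can also be written directly from PowerShell instead of a text editor. This sketch assumes AnythingLLM was installed to H:\anythingllm; adjust the path if yours differs:

```shell
# Write the configuration lines into a .env file in the AnythingLLM folder.
# H:\anythingllm is an assumed install path; change it to match your drive.
@"
OLLAMA_HOST=http://127.0.0.1:11434
LLM_PROVIDER=ollama
MODEL_BACKEND=dolphin-llama3
OLLAMA_MODEL_PATH=H:\Ollama\models
"@ | Set-Content -Path "H:\anythingllm\.env"
```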

Now start the program.

Pick Ollama: Run LLMs locally on your own machine.

The Ollama model should be

dolphin-llama3:latest

Once the program opens you can type in a question that would typically be censored to make sure that it is working.
