
Advanced users • Re: PI5 and easy AI/CV/LLM

Are LLMs on your Pi5 useful?
I noticed some LLMs seemed a bit, uh, "talkative".
So I did a simple test: can LLMs help with math homework...
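For anyone who wants to repeat the test, the setup is minimal; a sketch assuming Ollama's official Linux install script (check ollama.com for the current command):

# Install Ollama (official convenience script)
curl -fsSL https://ollama.com/install.sh | sh
# Fetch a model ahead of time; "ollama run" also pulls on first use
ollama pull tinyllama

With everything pulled, the transcript: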
ollama run vicuna
>>> what is 2 + 2

2 + 2 = 4
ollama run phi
>>> what is 2 + 2
2 + 2 = 4
ollama run codellama
>>> what is 2 + 2

The answer to 2 + 2 is 4.
ollama run llama2
>>> what is 2 + 2
The answer to 2 + 2 is 4.
ollama run llava
>>> what is 2 + 2

The answer to 2 + 2 is 4.
ollama run orca-mini
>>> what is 2 + 2
The answer to the expression "2 + 2" is 4.
ollama run mistral
>>> what is 2 + 2
The answer to the expression "2 + 2" is 4. This is a basic arithmetic
problem where you are adding two numbers together. In this case, you have
the number 2 being added to itself, resulting in a total of 4.
ollama run tinyllama
>>> what is 2 + 2
2 + 2 = 4
```
Explanation:
In the input number, we have added two numbers (1 and 2). The output of
this expression will be 4.

Solution:
The input number is represented as a string by "2 + 2".
When we add these two values using the + operator, it gives us the output
number as the result.
ollama run dolphin-phi
>>> what is 2 + 2
The result of the mathematical operation "2 + 2" is 4. In basic
arithmetic, addition is one of the four basic operations that combine two
numbers to give a third number called the sum. The operation is indicated
by the "+" symbol, and it's often denoted as adding two values together.
ollama run stablelm2
>>> what is 2 + 2
The sum of two numbers, even if they are represented by different
variables or written as a single number with no units digit (e.g. "two"
instead of "2"), would be the same value regardless of how it's mentioned
or represented. Therefore, in this case:

* 2 + 2
* two + two

Both representations will give you the same result: 4.
Do you know any people who, when asked a simple question, expand the answer beyond all reason?
Surprisingly, three of the smallest models, dolphin-phi, tinyllama, and stablelm2, are as "chatty" as the bigger mistral, if not more so.
I felt like I was watching "Rain Man".

Some LLMs give answers faster, and the Pi5's temperature does not get up to 82°C even with all CPUs at 100% use.
Short answers save power and stress on the poor Pi5, and I don't need to go for a coffee while waiting.
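If you want to watch the temperature during a run, the Pi's own tooling is enough; a quick sketch using vcgencmd, which ships with Raspberry Pi OS:

# Poll the SoC temperature once a second while a model is answering
while true; do vcgencmd measure_temp; sleep 1; done
# Check whether the firmware has throttled the CPU at any point
vcgencmd get_throttled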

Ollama really does make testing LLMs easier.
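It also scripts nicely; a sketch of a shoot-out that asks every model the same question and times each reply (this assumes all the models above are already pulled):

#!/bin/bash
# Ask each model one prompt non-interactively and time the answer
for m in vicuna phi codellama llama2 llava orca-mini mistral tinyllama dolphin-phi stablelm2; do
    echo "=== $m ==="
    time ollama run "$m" "what is 2 + 2"
done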

Next: can these LLMs help me code?

Statistics: Posted by Gavinmc42 — Wed Mar 06, 2024 2:08 pm


