This is also a follow-up to the previous post. The main interest here: how easy is a local install on a laptop?
1) Minimal HW check
2) Ease of installation
3) Data privacy
Overall a quick and positive experience (a necessary basis for a follow-up on whether the HW is also sufficient for training/enhancing the model).
1) HW: AMD Ryzen 5 4600H, 16 GB RAM, 1 TB NVMe SSD, dual boot W11 and Linux (Parrot). A 4-year-old laptop! Parrot Linux was used.
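For point 1, the relevant HW facts (CPU model and threads, RAM, free disk) can be read off with standard Linux tools. This is a generic sketch, not part of the install guide; each call is guarded so it is harmless where a tool is missing:

```shell
# Minimal HW check: CPU model and thread count, RAM, free disk space.
# lscpu, free and df are standard on desktop Linux (incl. Parrot).
command -v lscpu >/dev/null && lscpu | grep -E '^(Model name|CPU\(s\))'
command -v free  >/dev/null && free -h     # RAM: the 16 GB matter most here
command -v df    >/dev/null && df -h /     # free disk for the model files
```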
2) Very simple installation, following https://linuxblog.io/install-deepseek-linux/
a) Install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
The service was running by default afterwards; otherwise start it with
sudo systemctl start ollama.service
and stop it via
sudo systemctl stop ollama.service
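To see whether the service is actually up (and, optionally, to make it start at every boot), the standard systemctl commands apply. A small sketch, guarded so it is a no-op on machines without systemd or without Ollama:

```shell
# Check the Ollama service state; prints a one-line verdict either way.
if command -v systemctl >/dev/null 2>&1; then
  systemctl is-active ollama.service && echo "ollama is running" \
                                     || echo "ollama is not running"
  # optional: start the service at every boot
  # sudo systemctl enable ollama.service
fi
```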
b) Install the two smallest distilled models:
1.5B for simple text (command below installs and runs it)
$ ollama run deepseek-r1:1.5B
7B for some basic reasoning (command below installs and runs it)
$ ollama run deepseek-r1:7B
Downloads were fast: about 4 minutes for both, via a 5G phone hotspot.
$ ollama list
NAME                ID              SIZE
deepseek-r1:7b      0a8c26691023    4.7 GB
deepseek-r1:1.5B    a42b25d8c10a    1.1 GB
Queries worked on both models (quality according to size); 7B worked pretty well. Beware: the data cut-off date was mid/end 2024 (7B query attached).
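Besides the interactive chat, ollama run also takes the prompt as an argument, answers once and exits, which is handy for scripted checks. A sketch (the prompt text is just an example); guarded so it is a no-op without Ollama installed:

```shell
# One-shot query: 'ollama run MODEL "PROMPT"' prints the answer and returns
# to the shell instead of opening the interactive chat.
if command -v ollama >/dev/null 2>&1; then
  ollama run deepseek-r1:7B "In one sentence: what is a distilled model?"
fi
```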
3) Data privacy, Security
According to comments on the internet, some distributions contained malware/viruses. The ones installed here are (so far) clean. To double-check, Wireshark was run during the full session, with no traffic to any external address (URL/IP/DNS/...). A real security check would require SW scans and a large series of prompts with sensitive questions (blacklisted words etc., China, US) to check for entry-prompt-specific triggers (spying or monitoring).
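A cheap complement to the Wireshark capture, assuming a default Ollama setup: the API should only listen on localhost port 11434, which ss (iproute2) can confirm. A guarded sketch:

```shell
# List listening TCP sockets and filter for Ollama's default API port 11434;
# only a 127.0.0.1 (or [::1]) address should appear there.
if command -v ss >/dev/null 2>&1; then
  ss -tln 2>/dev/null | grep 11434 || echo "no listener on 11434 (Ollama not running?)"
fi
```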
In case this minimal HW is sufficient, further checks, also with KNIME workflows, are envisaged.
This initial check was positive on all 3 points and very easy on Parrot Linux, which is encouraging. The same should be the case for W11 and macOS: https://ollama.com/
Wireshark lists only local traffic:

The performance monitor showed good load balancing of the very many LLM tasks across all CPU cores (seamless re-use of each CPU for every new task). RAM usage seems fixed at 6 GB, which probably could be tuned up (or a bigger model installed).
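The ~6 GB figure can also be cross-checked from the command line; a generic procps sketch (RSS is resident memory in kB, so roughly 6,000,000 would be expected here while a model is loaded):

```shell
# Show PID, resident memory (kB), CPU% and command for any ollama processes.
# ps -C selects by command name (Linux procps); the fallback keeps the line
# harmless when nothing matches.
ps -C ollama -o pid,rss,pcpu,comm 2>/dev/null || echo "no ollama process found"
```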

Ollama keeps the model in RAM for 5 minutes (also across consecutive queries), then releases it. Status via the command: ollama ps
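The 5-minute window is Ollama's default keep-alive. According to the Ollama docs it can be changed via the OLLAMA_KEEP_ALIVE environment variable for the service (values like 30m, 1h, or -1 for never unload); the 30m below is just an example. A sketch of the systemd drop-in route:

```shell
# Raise the model keep-alive via a systemd drop-in (sketch; needs root).
# 'sudo systemctl edit ollama.service' opens an editor; add:
#   [Service]
#   Environment="OLLAMA_KEEP_ALIVE=30m"
# then: sudo systemctl restart ollama.service
#
# Afterwards 'ollama ps' shows the loaded models with their expiry time
# (guarded so the sketch is a no-op without Ollama installed):
if command -v ollama >/dev/null 2>&1; then
  ollama ps
fi
```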
16 February update: testing the HW with the larger model deepseek-r1:14b (9 GB).

Install and usage are again very easy and fast: 9 GB RAM used permanently, with efficient CPU load balancing as with the smaller models (14B query attached).
LM Studio (an alternative) has a GUI for customization.