The same hardware as in the local install of DeepSeek R1 is used. The same security concerns apply to third-party models (also refer to the disclaimer in LM Studio, last screenshot).
QWEN 2.5:
Install LM Studio: https://lmstudio.ai/ Launch LM Studio and, via Discover, search for Qwen; here the model qwen2.5-7b-instruct-1m was downloaded (4.6 GB) and loaded in LM Studio. The reply to the first query is good, but there are problems with concatenated queries.
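Beyond the chat UI, LM Studio can expose the loaded model through its local server, which speaks an OpenAI-compatible API (default base URL http://localhost:1234/v1). The sketch below builds a chat-completions request for the downloaded qwen2.5-7b-instruct-1m model and sends it with the standard library only; the base URL and temperature value are assumptions based on LM Studio defaults, and the call naturally only succeeds with the server started and the model loaded.

```python
import json
import urllib.request

# LM Studio's local server exposes an OpenAI-compatible API;
# this base URL is LM Studio's documented default (assumption: unchanged port).
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,  # assumed default; tune per use case
    }

def ask(prompt: str, model: str = "qwen2.5-7b-instruct-1m") -> str:
    """Send one prompt to the loaded model.

    Requires LM Studio's local server to be running with the model loaded.
    """
    payload = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible response shape: choices[0].message.content
    return body["choices"][0]["message"]["content"]

# Example usage (with the server running):
#   reply = ask("What is the context length of this model?")
```

Because the endpoint mimics the OpenAI API, the same client code can later be pointed at a DeepSeek model by changing only the model name.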
DEEPSEEK V3:
The available hardware is likely not sufficient. Eventually a small model subset may be installed once it becomes available on Ollama (small distribution size < 10 GB).
DeepSeek V3 is open source; alternatively, search for, install and run it via LM Studio: https://lmstudio.ai/ Launch LM Studio and, via Discover, search for DeepSeek V3 Lite (when available). Refer to the Discover screenshot below and pay attention to the disclaimer for security reasons (ideally run in an isolated instance and use for research only until a thorough validation has been passed). V3 release notes (architecture).
