Trying to install Docker to run OpenWebUI on a Ryzen 5500 with a GTX 1070, but WSL won't install and I'm not sure why.
Are my PC specs too low, or is it something else?
Thanks
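For reference, this is roughly the path I understand I need (Docker Desktop on Windows apparently wants the WSL 2 backend) - commands as I've found them, so check them against the docs:

wsl --install    # run from an elevated PowerShell; installs WSL 2 plus a default Ubuntu distro
wsl --status     # shows the current WSL configuration / what's missing
wsl --update     # updates the WSL kernel if it's already partly installed

and then, once Docker Desktop is happy, something like this for OpenWebUI (from memory, so verify the exact image and ports against their README):

docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main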
What Windows version, how much memory?
Windows 11 Pro, 16GB
How about an error message?
This is an impossible thread to contribute to until you provide comprehensive information about your environment, errors, general setup and anything that you've changed/modified recently that may have caused problems afterwards. Can you post a useful summary please?
gehenna:
This is an impossible thread to contribute to until you provide comprehensive information about your environment, errors, general setup and anything that you've changed/modified recently that may have caused problems afterwards. Can you post a useful summary please?
You said it far more gracefully than I.
Windows 11 Pro will be fine, so long as it is 22H2 or later (I use 24H2).
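If you're not sure which feature update you're on, winver will show it, or from PowerShell (reading the registry value Windows itself uses, assuming a standard install):

Get-ItemPropertyValue 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' -Name DisplayVersion    # prints e.g. 22H2 / 24H2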
Batman: Windows 11 Pro, 16GB
After playing around with running a bunch of AI models, I think you're going to struggle to get anything substantial running with 16GB of RAM. These models are incredibly memory-hungry; you should chuck in an extra 16GB.
Have a play around with LM Studio instead: https://lmstudio.ai/ - it's more user-friendly and can run everything, including DeepSeek and Llama, totally fine. The more RAM the better, along with having a dedicated GPU.
I was experimenting tonight (crap, where did that time go?) with a few models on a server with 32 CPU cores and 16GB of RAM assigned. Even then, AI models run slowly on CPU alone. Chuck in a decent GPU (the GTX 1070 may work, but could be a little too old) and performance improves greatly.
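If you later want to hit the model from your own scripts, LM Studio can also run a local server that speaks an OpenAI-style API - started from its server/developer tab, on port 1234 by default. A rough sketch from PowerShell, assuming a model is already loaded (the model name here is just a placeholder):

$body = @{
    model    = "local-model"    # LM Studio answers with whichever model you have loaded
    messages = @(@{ role = "user"; content = "Say hello in five words." })
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Uri "http://localhost:1234/v1/chat/completions" -Method Post -ContentType "application/json" -Body $body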
michaelmurfy:
Batman: Windows 11 Pro, 16GB
After playing around with running a bunch of AI models, I think you're going to struggle to get anything substantial running with 16GB of RAM. These models are incredibly memory-hungry; you should chuck in an extra 16GB.
Have a play around with LM Studio instead: https://lmstudio.ai/ - it's more user-friendly and can run everything, including DeepSeek and Llama, totally fine. The more RAM the better, along with having a dedicated GPU.
I was experimenting tonight (crap, where did that time go?) with a few models on a server with 32 CPU cores and 16GB of RAM assigned. Even then, AI models run slowly on CPU alone. Chuck in a decent GPU (the GTX 1070 may work, but could be a little too old) and performance improves greatly.
Thanks - LM Studio is amazing, just one click and everything is installed.
Interestingly, the 13900K + 4090, the MBPro, and the 5500 + 1070 are all roughly the same speed for DeepSeek (the Qwen distill), i.e. rather slow.
Llama is a lot faster; not sure if that means anything.
I've deleted Docker as I don't want the Windows virtual machine feature turned on (apparently it makes games slower?).
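To double-check those features are actually off after removing Docker, I gather you can query them from an elevated PowerShell (and Disable-WindowsOptionalFeature turns them off again, after a reboot):

Get-WindowsOptionalFeature -Online -FeatureName VirtualMachinePlatform             # shows Enabled / Disabled
Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux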
gehenna:
This is an impossible thread to contribute to until you provide comprehensive information about your environment, errors, general setup and anything that you've changed/modified recently that may have caused problems afterwards. Can you post a useful summary please?
Well, it didn't give much detail when it said it failed. After some digging around, it says I need to turn on the virtual machine platform feature.
I've deleted Docker and am now running LM Studio.
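For anyone else who hits the same thing: from what I found, the features WSL complains about can be turned on from an elevated PowerShell (then reboot), though I haven't gone down this path myself since moving to LM Studio:

Enable-WindowsOptionalFeature -Online -FeatureName VirtualMachinePlatform -All
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux -All

CPU virtualisation (SVM on Ryzen boards) also needs to be enabled in the BIOS/UEFI for any of this to work.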