tl;dr
- Can someone give me step-by-step instructions (ELI5) on how to access the LLMs on my rig from my phone?
Jan seems the easiest, but I've also tried Ollama, LibreChat, etc.
…
I've taken steps to secure my data and now I'm going the self-hosting route. I don't care to become a savant with the technical aspects of this stuff, but even the basics are hard to grasp! I've been able to install an LLM provider on my rig (Ollama, LibreChat, Jan, all of them) and I can successfully get models running on them. BUT what I would LOVE to do is access the LLMs on my rig from my phone while I'm nearby. I've read that I can do that via WiFi or LAN or something like that, but I have had absolutely no luck. Jan seems the easiest because all you have to do is something with an API key, but I can't even figure that out.
Any help?
Self-hosting IS hard, don't beat yourself up too much over it… After all, you're trying to run services for yourself that are usually provided by companies with thousands of employees.
A server requires knowledge, maintenance, and time; it's okay to feel frustrated sometimes.
Why don’t you ask your LLMs how to do it.
lol I have! They all say roughly the same thing, but it's just not working for me.
How strange.
They are trolling you. They are probably radical anti-AI folk.
Why do you want to set it up if your experience is bad results?
To eliminate another subscription I imagine.
Just do like me: install Ollama and OpenWebUI on the server, install Termux on Android, and connect through Termux with SSH port forwarding:

ssh -L 0.0.0.0:3000:localhost:3000 user@ServerIP_OnLAN

Then access OpenWebUI at http://127.0.0.1:3000/ in your phone's browser. Or SSH-forward the Ollama port instead and use the Ollama Android app. This requires you to be on the same LAN as the server. If you port-forward SSH through your router, you can access it remotely through your public IP (if so, I'd recommend only allowing login with keys, or at least rate-limiting SSH login attempts).
The shell command will then be

ssh -L 0.0.0.0:3000:localhost:3000 user@YourPublicIP

But what are the chances that you run the LLM on a Linux machine and use an Android to connect, like me, and not a Windows machine and an iPhone? You tell me. No specs posted…
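(A quick sketch of the hardening bit I mentioned, in case it's unfamiliar; this assumes the server is a typical Linux box running OpenSSH, and user@ServerIP_OnLAN is just a placeholder:

ssh-keygen -t ed25519
ssh-copy-id user@ServerIP_OnLAN
# then set "PasswordAuthentication no" in /etc/ssh/sshd_config on the server and restart sshd

Only turn password logins off after you've confirmed that key login works, or you can lock yourself out.)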
Oh! Also, I'm using Windows on my PC. And my phone is an iPhone.
I'm not using Linux yet, but that is on my to-do list for the future! After I get more comfortable with the basics of self-hosting.
Oh! Also, I'm using Windows on my PC. And my phone is an iPhone.
Okay, that’s a starting place. So if this is Windows, and if you only care about access on the wireless network, then I suppose that it’s probably easiest to just expose the stuff directly to other machines on the wireless network, rather than tunneling through SSH.
You said that you have ollama running on the Windows PC. I’m not familiar with LibreChat, but it has a Web-based interface? Are you wanting to access that from a web browser on the phone?
Yes exactly! I would love to keep it on my network for now. I've read that "exposing a port" is something I may have to do in my Windows Firewall options.
Yes, I have Ollama on my Windows rig. But I'm down to try a different one if you suggest so. TBH, I'm not sure if LibreChat has a web UI. I think accessing the LLM on my phone via web browser would be easiest. But there are apps out there like Reins and Enchanted that I could take advantage of.
For right now I just want to do whatever is easiest so I can get a better understanding of what I’m doing wrong.
Yes, I have Ollama on my Windows rig.
TBH, I'm not sure if LibreChat has a web UI.
Okay, gotcha. I don't know if Ollama has a native Web UI itself; if it does, I haven't used it. I know that it can act as a backend for various front-end chat applications. I do know that kobold.cpp can operate both as an LLM backend and run a limited Web UI, so at least some backends do have Web UIs built in. You said that you've already used Ollama successfully. Was this via some Web-based UI that you would like to use on your phone, or just some other program (LibreChat?) running natively on the Windows machine?
Backend/front end. I see those a lot but I never got an explanation for them. In my case, the backend would be Ollama on my rig, and the front end would be me using it on my phone, whether that's with an app or a web UI. Is that correct?
I will add kobold to my list of AIs to check out in the future. Thanks!
Ollama has an app (or maybe interface is a better term for it) on my Windows rig that I download models to. Then I can use said app to talk to the models. I believe Reins: Chat for Ollama is the app for iPhone that would let me use my phone to chat with the models that are on the Windows rig.
Backend/front end. I see those a lot but I never got an explanation for them. In my case, the backend would be Ollama on my rig, and the front end would be me using it on my phone, whether that's with an app or a web UI. Is that correct?
For Web-based LLM setups, it's common to have two different software packages. One loads the LLM into video memory and executes queries on the hardware. That's the backend. It doesn't need to have a user interface at all. Ollama or llama.cpp (though I know that llama.cpp also has a minimal frontend) are examples of this.
Then there’s a frontend component. It runs a small Web server that displays a webpage that a Web browser can access, provides some helpful features, and can talk to various backends (e.g. ollama or llama.cpp or some of the cloud-based LLM services). Something like SillyTavern would be an example of this.
Normally the terms are used in the context of Web-based stuff; it’s common for Web services, even outside of LLM stuff, to have a “front end” and a “back end” and to have different people working on those different aspects. If Reins is a native iOS app, I guess it could technically be called a frontend.
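If it helps make "frontend talks to backend" concrete: the backend is really just an HTTP server. Ollama listens on port 11434 by default, and every frontend (Reins included, I believe) is just sending it web requests. You can poke at it yourself from a command prompt on the Windows machine; recent Windows includes curl.exe. For example, this asks the backend which models it has installed:

curl http://localhost:11434/api/tags

And a chat frontend is essentially doing the equivalent of this, just with a nicer interface wrapped around it (swap in whatever model you've actually pulled; "llama3" here is only an example):

curl http://localhost:11434/api/generate -d "{\"model\": \"llama3\", \"prompt\": \"Say hi\", \"stream\": false}"

Nothing magic, in other words. "Exposing the port" just means letting other machines on your network make those same requests.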
But, okay, it sounds like probably the most-reasonable thing to do, if you like the idea of using Reins, is to run Ollama on the Windows machine, expose ollama’s port to the network, and then install Reins on iOS.
So, yeah, we probably need to open a port in Windows Firewall (or Windows Defender Firewall… not sure what the correct terminology is these days, as I'm long out of date on Windows). It sounds like having that firewall active has been the default on Windows for some years, but I should be able to stumble through this.
While it’s very likely that you aren’t directly exposing your computer to the Internet — that is, nobody from the outside world can connect to an open port on your desktop — it is possible to configure consumer routers to do that. Might be called “putting a machine in the DMZ”, forwarding a port, or forwarding a range of ports. I don’t want to have you open a port on your home computer and have it inadvertently exposed to the Internet as a whole. I’d like to make sure that there’s no port forwarding to your Windows machine from the Internet.
Okay, first step. You probably have a public IP address. I don’t need or want to know that — that’d give some indication to your location. If you go somewhere like https://whatismyipaddress.com/ in a web browser from your computer, then it will show that – don’t post that here.
That IP address is most-likely handed by your ISP to your consumer broadband router.
There will then be a set of "private" IP addresses that your consumer broadband router hands out to all the devices on your WiFi network, like your Windows machine and your phone. These will very probably be 192.168.something.something, though they could also be 172.something.something.something or 10.something.something.something. It's okay to mention those in comments here; they won't expose any meaningful information about where you are or your setup. This may be old hat to you, or new, but I'm going to mention it in case you're not familiar with it; I don't know what your level of familiarity is.

What you're going to want is the "private" IP address of the Windows machine. On your Windows machine, if you hit Windows Key-R and then enter "cmd" into the resulting dialog, you should get a command-line prompt. If you type "ipconfig" there, it should show a line listing your private IPv4 address, probably something like that "192.168.something.something". You're going to want to grab that address. It may also be possible to use the name of your Windows machine to reach it from your phone, if you've named it; there's a network protocol, mDNS, that may let you do that, but I don't know whether it's active out-of-box on Windows or not, and I'd rather confirm that the thing is working via IP before adding more twists to this.
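Just so you know what to look for: ipconfig prints a section for each network adapter, and the line you care about in the active adapter's section looks roughly like this (address made up):

IPv4 Address. . . . . . . . . . . : 192.168.1.23

Whatever that address actually is on your machine is the one you'll eventually type into Reins (or a browser) on the phone.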
Go ahead and fire up ollama, if you need to start it — I don’t know if, on Windows, it’s installed as a Windows service (once installed, always runs) or as a regular application that you need to launch, but it sounds like you’re already familiar with that bit, so I’ll let you handle that.
Back in the console window that you opened, go ahead and run:

netstat -a -b -n

(If the -b flag complains about needing elevation, run the command prompt as administrator, or just drop -b.) It will look kinda like this:
https://i.sstatic.net/mJali.jpg
That should list all of the programs listening on any ports on the computer. If ollama is up and running on that Windows machine and doing so on the port that I believe it is, then you should have a line that looks like:
TCP    0.0.0.0:11434    0.0.0.0:0    LISTENING

"11434" is the port that I expect ollama to be listening on.
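(Side note: netstat spits out a lot of lines. If it's hard to find the one you want, you can filter it; findstr is built into Windows, so this only prints lines that mention ollama's port:

netstat -a -n | findstr 11434

Same information, much less scrolling.)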
If the address you see before “11434” is 0.0.0.0, then it means that ollama is listening on all addresses, which means that any program that can reach it over the network can talk to it (as long as it can get past Windows Firewall). We’re good, then.
Might also be "127.0.0.1". In that case, it'll only be listening for connections originating from the local computer, and it'll have to be reconfigured to use 0.0.0.0.
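I believe the way to do that on Windows is the OLLAMA_HOST environment variable (double-check ollama's docs; there may also be a toggle for it in ollama's own settings). Roughly, from a command prompt:

setx OLLAMA_HOST 0.0.0.0

then quit ollama from the system tray and start it again so it picks the variable up. That tells it to listen on all of the machine's addresses rather than just the loopback one.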
I’m gonna stop here until you’ve confirmed that much. If that all works, and you have ollama already listening on the “0.0.0.0” address, then next step is gonna be to check that the firewall is active on the Windows machine, punch a hole in it, and then confirm that ollama is not accessible from the Internet, as you don’t want people using your hardware to do LLM computation; I’ll try and step-by-step that.
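Just as a preview of that step (feel free to wait for the proper walkthrough): the firewall part will probably boil down to a single rule added from an administrator command prompt, something like

netsh advfirewall firewall add rule name="Ollama LAN" dir=in action=allow protocol=TCP localport=11434 profile=private remoteip=localsubnet

The rule name is arbitrary; limiting it to the private profile and local subnet keeps it from applying anywhere but your home network. The quick test afterwards is to open the browser on the phone (same WiFi) and go to http://<that private IPv4 address>:11434/ ; if everything is lined up, ollama should answer with a plain "Ollama is running" page, and that same address and port are what you'd point Reins at.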
Dope! This is exactly what I needed! I would say that this is a very “hand holding” explanation which is perfect because I’m starting with 0% knowledge in this field! And I learned so much already from this post and your comment!
So here's where I'm at:
- A backend is where all the weird C++ language stuff happens to generate a response from an AI.
- A front end is a pretty app or webpage that takes that response and makes it more digestible to the user.
- Agreed. I've seen in other posts that exposing a port on Windows Defender Firewall is the easiest (and safest?) way to go for specifically what I'm looking for. I don't think I need to forward a port, as that would be for more remote access.
- I went to the whatismyipaddress website. The IPv6 was identical to one of the ones I have; the IPv4 was not. (But I don't think that matters moving forward.)
- I did the ipconfig in the command prompt terminal to find the info, and my IPv4 is 10.blahblahblah.
- I ran netstat -abn (this is what worked to display the necessary info). I’m able to see 0.0.0.0 before the 11434! I had to go into the settings in the ollama backend app to enable “expose Ollama to the network”.
I’m ready for the next steps!





