For this week’s blog, I’d like to touch on the topic of on-premise Jamf Pro installations, and more specifically, some considerations to make when making your on-prem server reachable from outside your network.
First of all: the thoughts and statements in this article are my own. Please feel free to comment, correct and make suggestions, but remember to refer to docs.jamf.com (and other Jamf KBs, white papers and tech articles) for official guidance on supported installations of Jamf Pro.
That said, more and more people are choosing Jamf Cloud over on-premise Jamf Pro installations, for multiple reasons. With Jamf Cloud you don’t need to manage your own server, keep it up to date, keep it secure, etc., which frees up a lot of time for other things: managing your devices instead of your servers, or just enjoying a cold beer, a nice cup of coffee or whatever you fancy doing instead of maintaining servers.
But some environments are not ready to move to cloud services (yet), because their type of business doesn’t allow it, or for some other valid reason. Hosting an on-premise Jamf Pro server might sometimes be the only option. That’s fine, but hosting your own server comes with big responsibilities (which would otherwise be taken care of by Jamf when using Jamf Cloud). Apart from organising the required resources, keeping your servers up and running, and investing time in maintenance, there are multiple network and security considerations to make.
I’m not going to dive into all the requirements for the Jamf Pro server, as those can easily be found on: Jamf Pro System Requirements
Instead, I’d like to touch on one specific part of the on-premise setup: how do you allow your devices to communicate with your internal Jamf Pro server when they are outside your internal network, roaming the beautiful but sometimes hostile internet?
Allowing your devices, both macOS and iOS, to communicate with your Jamf Pro server while they are inside your internal network is most likely going to be a straightforward exercise. By default, devices use port 8443 to communicate with Jamf Pro, and apart from allowing some communication outbound from your network to Apple, there is not that much work to do. See: Network ports used by Jamf Pro
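As a quick illustration of what “devices can talk to Jamf Pro on 8443” boils down to, here is a small Python sketch that checks whether a TCP connection and TLS handshake succeed against a given host and port. The hostname in the usage comment is a hypothetical placeholder, not a real server.

```python
# Reachability sketch: can this machine complete a TCP connection and a TLS
# handshake with a Jamf Pro server on its management port?
import socket
import ssl


def can_reach_jamf(host: str, port: int = 8443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection and TLS handshake both succeed."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # sketch only: skip certificate validation
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except OSError:  # covers refused/timed-out connections and TLS failures
        return False


# Example (hypothetical hostname):
# can_reach_jamf("jamf.example.com", 8443)
```

Run from inside the network this should succeed against the internal address; run from outside, it only succeeds once one of the setups discussed below is in place.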
But what if your devices do leave your internal network? Whether intentionally (roaming users) or unintentionally (a stolen device, for instance), you will certainly want to keep them under the control of your Jamf Pro server, wherever they are.
This means that, one way or another, the devices which are freely roaming the outside world must be able to communicate with your Jamf Pro server (inbound to the server). Let’s have a look at the diagram below:
While both the devices and the Jamf Pro server need to be able to contact the Apple network (17.0.0.0/8) over specific ports (5223, 443, 2195, 2196), I’ll only focus on the inbound connection to the Jamf Pro server over port 8443 here (custom installations, or more advanced setups, might use port 443).
To make this possible, there are a few options:
First of all, my favourite one: take the entire exercise of making your Jamf Pro server reachable from the internet off the table, and reconsider installing the server on-premise. Go Jamf Cloud! Let Jamf handle all those server concerns and enjoy your free time! They have a solid team of Cloud geniuses who are paid to do this kind of magic on a daily basis 🙂
But what if you’re still sticking to the plan to install on-premise? No problem, there are three solutions to make this work.
The first option is to open port 8443 on the firewall and forward it to the Jamf Pro server. Straightforward and easy to do, but maybe not the most secure. Apart from other security considerations, this would also expose the Jamf Pro admin web portal to the internet. Personally, not my preferred way of doing things (with the exception of Jamf Cloud obviously, as Jamf takes care of all the security aspects for you!).
The second option, and maybe the most common way to achieve this, is installing Jamf Pro on a second server hosted in the DMZ of the network. The idea here is that on this second server we only install the Jamf Pro web application (on top of the OS: Java, Tomcat, Jamf Pro) and allow inbound communication to this server from the internet over port 8443 (the default). We cluster this server with the internal Jamf Pro server and connect it to the same internal MySQL database (on the master internal Jamf Pro server or, optionally, on yet another server). This allows you to close down the web portal of Jamf Pro on the DMZ server and limit the connection to managed devices reaching out to Jamf Pro for management tasks only: Limited Access.
Setting up NAT for this external (DMZ) server, allowing inbound communication on 8443 and configuring a split-DNS setup shouldn’t be that much of a hassle, but there are some considerations to take into account here.
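The split-DNS part of such a setup can be sanity-checked with a few lines of Python: from the internal network, the Jamf Pro FQDN should resolve to the internal address, and from outside, to the public one. The hostname and IP in the usage comment are hypothetical placeholders.

```python
# Split-DNS sanity check: does a hostname currently resolve to the IPv4
# address we expect from this vantage point?
import socket


def resolves_to(host: str, expected_ip: str) -> bool:
    """Return True if host currently resolves to expected_ip (IPv4)."""
    try:
        return socket.gethostbyname(host) == expected_ip
    except socket.gaierror:  # name does not resolve at all
        return False


# Run from inside the network you'd expect something like:
# resolves_to("jamf.example.com", "10.0.1.20")  # hypothetical internal IP
```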
First of all, we need an additional server, which, apart from making the (virtual) machine available (costs, availability of resources, etc.), also adds extra workload to our maintenance and updating routine.
On top of that, we still need to make a small hole in the firewall to allow this server to communicate with the internal MySQL server on port 3306 (and possibly with the Active Directory or LDAP server, if applicable). Only a small hole, but still… remember that this communication is not encrypted by default, and the password is sent in clear text when communicating with the MySQL server.
NOTE: Before we start panicking about this, here is my take on it. Yes, it’s a security consideration to be aware of, but if, as sys admins, we are really worried about someone hanging around unnoticed in our DMZ, sniffing packets and hacking our servers, I bet there are bigger problems to worry about… Compare it to this: let’s secure the bank building and the door of the vault room first, before worrying about someone who might be sitting unnoticed in front of the vault, waiting for somebody to open it and yell the vault key out loud, right? Just saying… if this is your biggest concern, there are probably other security issues you might want to fix first.
Also, yes, it’s technically possible to encrypt the communication between Jamf Pro and the MySQL server. I’m not going to elaborate on how to do this, as it’s not an officially Jamf-supported configuration, and it’s beyond the scope of this post. The purpose of this post is to share the overall thoughts and considerations you should weigh before committing to a specific setup, or choosing an on-premise Jamf Pro installation over Jamf Cloud.
There is, however, like with most Jamf (or general Mac management) related questions, already some information about this on Jamf Nation:
Feature request, Discussion, …
Last but not least, according to the Jamf recommendations for clustered setups, you should also enable Memcached (which will become mandatory in future versions of Jamf Pro). This not only requires additional resources but also triggers additional network and security considerations.
See Jamf Pro Memcached.
Even with all the above considerations in mind, and while you might be leaning towards Jamf Cloud by now 🙂, this is still an acceptable setup for many organisations once everything has been evaluated.
But there is, however, another way of doing things! And this brings us to Reverse Proxy.
A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server. A normal (forward) proxy works in the opposite direction, typically managing access from internal clients out to external servers.
So, instead of deploying a second Jamf Pro (DMZ) server to securely provide access to Jamf Pro from outside the internal network, a reverse proxy might offer an alternative way of achieving this.
Note: please be aware that the following setup is not the default or officially supported way of installing Jamf Pro and should not be blindly attempted in a production environment.
Let’s start with a diagram.
The idea here is to configure a reverse proxy which will handle all incoming requests from the internet inbound to the Jamf Pro server, which gives us multiple benefits:
- First of all, we most likely don’t need an additional server, as an existing load balancer, or any reverse proxy capabilities, might already be in place.
- We can keep our entire Jamf infrastructure internal. Not only the Jamf Pro server(s) as such, but also the communication with the MySQL server. All nicely protected behind the firewall, encrypted or not.
- The only port you’ll need to open is the one Jamf Pro uses for SSL (HTTPS) communication. Compare this to the setup with a second Jamf Pro server in the DMZ, where we have to punch holes for both MySQL and LDAP.
- Finally, we have scalability. As the entire Jamf infrastructure sits nicely behind the firewall on the internal network, clustering for load balancing and performance should be much easier than with a Jamf Pro server living in the DMZ.
- Also, the Memcached requirement should be easier to implement when all servers live on the same internal network.
Documenting, or discussing, in-depth configurations of reverse proxies would take me a bit too far off topic here, and is hardly possible anyway given all the possible hardware and network configurations available to achieve this. (I will most likely spend another blog post discussing my homelab test setup in the future.)
There is, however, one additional question I’d like to highlight: how do we close access to the web portal from outside the internal network when using a reverse proxy setup?
When going for a clustered setup, with a Jamf Pro server in DMZ, we had the benefit of the built-in functionality to close down the web portal on the public facing (DMZ) server, to limit communication to the managed devices only. This allows the devices to contact the server for management tasks, but the web portal is not reachable from the browser. Using strong admin passwords should provide enough security to avoid unauthorised access to your admin portal, but still, limited access provides an extra level of protection.
By not putting an extra Jamf Pro server in the DMZ, we lose this built-in functionality (with only one Jamf Pro server we obviously cannot shut down the web portal), so let’s briefly look at the possibilities to achieve a similar result.
The first option would be to limit access to the admin console at the Tomcat level, by tweaking the web.xml file or applying firewall rules. Have a look at this Jamf Nation link as a starting point if this is something you’re interested in.
The second option involves clustering the internal Jamf Pro infrastructure. By adding an additional server, you can enable limited access on the Jamf Pro server the reverse proxy redirects the managed devices to, and use the additional server for the admin console.
While this again adds the requirement of an additional server, you could gain a small performance benefit if you select this additional server as the master, by offloading some tasks from the client-facing web applications. Have a look at the Master vs Child Web App Responsibilities in a clustered Jamf Pro setup.
That’s all folks! The goal of this post was to list the considerations you have to make when choosing between Jamf Cloud and an on-prem installation, and to highlight the possible options to make an on-prem server reachable from the Internet.
I’d love to see any questions or remarks you might have in the comments below!