  • Technically, it might be faster, but that’s not usually the reason. Email servers generally have to do a lot of work to confirm messages aren’t spam, and that work usually takes significantly longer than any potential DNS savings. In fact, that spam checking is probably the reason you see secondary domains used.

    When the main domain is used for many purposes (like servers, users, printers, vendor communications, accounting communications, and so forth), it leaves a lot of room for misuse. Many pre-ransomware viruses would just send out thousands of emails per hour. A mass-mailing server could also drag down the domain’s reputation. There are just so many ways to tarnish the reputation of your email server or your email domain.

    Many spam analysis systems group subdomains and their parent domain together: the subdomains contribute to the domain’s score, and the domain’s score contributes to each subdomain’s score. To deliver a lot of email successfully, both your servers and your domains need a very strong reputation. Any mark against that reputation can keep messages from reaching users. When large volumes of email need to be controlled, it can be hard to get everyone in the organization to follow the email rules (especially when the problem isn’t users but viruses/hackers), and easy to just register a new, more strictly controlled domain.
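
    No spam vendor publishes its exact scoring, but the coupling works roughly like this toy sketch (the function, the weight, and the 0-to-1 scale are all invented for illustration):

    ```python
    # Toy model only: real filters don't publish their formulas, but many
    # couple a subdomain's reputation to its parent domain's in some way.
    def effective_reputation(sub_score: float, domain_score: float,
                             domain_weight: float = 0.4) -> float:
        """Blend a subdomain's own score with its parent domain's.

        Scores run 0.0 (burned) to 1.0 (pristine); the weight is made up.
        """
        return (1 - domain_weight) * sub_score + domain_weight * domain_score

    # A pristine mail subdomain still suffers if the main domain gets
    # tarnished by, say, a compromised printer blasting spam:
    print(effective_reputation(sub_score=1.0, domain_score=0.3))  # 0.72
    # ...which is part of why senders register a fresh, locked-down domain.
    ```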

    Some of the recent changes in email policy and tech might change the game, but old habits die hard. A separate domain still generally gets delivered more reliably, has potential security benefits, and can often work around IT or policy restrictions. The practice might phase out, but it might not. The benefit usually outweighs a slight disadvantage that 99% of people will never notice.

    tl;dr

    Better-controlled email reputation.


  • Time isn’t the only factor in adoption. Between the adoption of IPv4 and IPv6, the networking stack shifted away from networking companies like Novell to the operating systems themselves, and Windows didn’t enable IPv6 by default until Vista.

    When IPv4 was adopted, the networking industry was a competitive space. By the time IPv6 came around, it was becoming stagnant, much like Internet Explorer. It wasn’t until Windows Vista that IPv6 was enabled by default, Windows 7 before professionals would consider it, and another few years before it was actually deployable in a secure manner (and that’s still questionable).

    Most IT support staff and developers couldn’t even play with IPv6 during the early 2000s because our operating systems and network stacks didn’t support it. Meanwhile, there was a boom of Internet-connected devices that only supported IPv4. A few other things affected adoption too, but it really was a pretty bad time for an IPv6 migration. It’s a little better now, but “better” still isn’t very good.
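
    If you’re curious what your own stack supports today, here’s a minimal probe using Python’s standard socket module (just a local sanity check, not a full connectivity test):

    ```python
    import socket

    # Reports whether the interpreter was built with IPv6 support; it says
    # nothing about whether the OS or the network actually routes IPv6.
    print("IPv6 compiled in:", socket.has_ipv6)

    # A slightly stronger check: try to open (not connect) an IPv6 socket.
    # On stacks without IPv6, creating the socket raises an OSError.
    try:
        with socket.socket(socket.AF_INET6, socket.SOCK_STREAM):
            print("OS exposes an IPv6 stack")
    except OSError as exc:
        print("No usable IPv6 stack:", exc)
    ```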


  • It seems you’re mixing up the concepts of voting systems and candidate selection. Neither FPP nor FPTP should sound scary. As a voting system, FPP works well enough more often than many want to admit. The longer name just describes it in more detail: First Preference Plurality.

    Every voting system is only as bottom-up or top-down as its candidate selection process; the voting system itself doesn’t really affect which one it is. Requiring approval or a vote from the current rulers would be top-down. Requiring only ten signatures on a community petition is more bottom-up.

    The voting systems don’t care about the candidate selection process. Some require pre-coordination for a “party”, but that could also be a party of 1. A party of 1 might not get as much representation as one with more people, but that’s equally true under every voting system that selects the same number of candidates.

    Voting systems don’t even need to be used for representation. If a group of friends is voting on where to eat, one problem might be selecting the places to vote on, but that happens before the vote. In the vote itself, 70% might prefer pizza over Indian food, yet Indian food can still win under FPP because the pizza voters split their first choices between different pizza places. Having more candidates often leads to minority rule/choice, and that’s not very good for food choices or community representation.
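
    To make the vote-splitting concrete, here’s a toy FPP tally in Python (the ballot counts are made up to match the pizza example):

    ```python
    from collections import Counter

    # Hypothetical dinner vote: 14 of 20 friends (70%) prefer pizza over
    # Indian food, but their first choices split across three pizzerias.
    ballots = (
        ["Pizzeria A"] * 5
        + ["Pizzeria B"] * 5
        + ["Pizzeria C"] * 4
        + ["Indian"] * 6
    )

    # First Preference Plurality: count only each voter's first choice.
    tally = Counter(ballots)
    winner, votes = tally.most_common(1)[0]
    print(f"{winner} wins with {votes}/{len(ballots)} first-choice votes")
    # Indian wins with 6/20, even though 14/20 voters wanted pizza.
    ```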



  • I’m still rocking a Galaxy Watch 4, one of the first Samsung watches with WearOS. It has a true always-on screen, and most smartwatches should. Always-on was essential to me; I generally notice within 60 minutes if an update or some “feature” tries to turn it off. Unfortunately, that’s the only thing off about your comment.

    It’s a pretty rough experience. The battery is hit or miss. At the best of times, I could get 3 days. Keeping it locked (like after charging) used to kill it within 60 minutes (thankfully fixed after a year). Bad updates could kill the battery even when the watch was new: from 3 days of life down to 10 hours, then back to 3 days. Now, after almost 3 years, it’s more like 30 hours than 3 days.

    In general, the battery with the always-on display lasts more than 24 hours. That’d be pretty acceptable for a smartwatch, but is it really a smartwatch?

    It can’t play music on its own without overheating. It can’t hold a phone call on its own without overheating. App support is limited, and the processor seems to struggle most of the time. Genuinely smart features are rare, especially for something that needs charging this often.

    Most people would be better off with a Pebble or another less “smart” watch: better water resistance, better battery life, longer support, 90% of the usable features, and a few extras that help make up for any differences.