Closed Thread
Page 1 of 2
Results 1 to 15 of 21

Thread: 11/01/2022 SWGEmu and Service Provider Change

  1. #1
    SWGEmu Admin Lolindir's Avatar
    Join Date
    Jul 2011
    Location
    Norway
    Posts
    12,548
    Play Stats
    Inactive

    11/01/2022 SWGEmu and Service Provider Change

    SWGEmu and Service Provider Change

    11/1/2022
    The SWGEmu Team



    Many of you know we moved our servers in 2017 to packet.net's bare metal servers. Through a gracious arrangement with packet.net, we were able to secure multi-year discounted prices that gave us the considerable resources we needed to run our services. In March 2020, Equinix acquired packet.net and continued to honor our contract. However, as they look to optimize their data center footprint, they have decided to shut down packet.net's old data center that hosts our services. They have given their customers until 30 November 2022 to shut down all servers in the old data centers.

We are looking into solutions where we can get bare metal hosts or one of the top three public cloud providers near the same geolocation as our current servers (EWR - New Jersey). The requirements for our servers are quite high compared to basic shared hosting, and our latency and reliability requirements go beyond many “cheap server” providers. Our current systems have 256 GB of RAM, Intel CPUs with 48 threads, and 960 GB of mirrored SSD/NVMe storage, combined with 10 Gbps internet access and 4 TB of outbound transfer. Our previous deal with packet.net provided all this for about $1,200.00 USD/month.

    The publicly listed cost for equivalent servers in the new Equinix data centers is about 2.7 times more expensive than our current monthly costs. Unfortunately, our current donations will not support those costs, so we need to optimize our services. To this end, we will be running an event to bring Basilisk to a final conclusion, and we will be testing other service providers to find a price/quality match that will meet our needs going forward.
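For rough scale, a quick back-of-envelope sketch of what that multiplier implies (only the $1,200/month base and the 2.7x factor are from the post; the rest is derived):

```python
# Back-of-envelope cost math from the figures quoted above.
current_monthly = 1200.00   # USD/month under the old packet.net deal
multiplier = 2.7            # publicly listed Equinix price vs. current
new_monthly = current_monthly * multiplier

print(f"Estimated new cost: ${new_monthly:,.2f}/month")                      # ~ $3,240/month
print(f"Added cost per year: ${(new_monthly - current_monthly) * 12:,.2f}")  # ~ $24,480/year
```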

This move is complicated; most people don't know how complex our environment is. We have 35 services running, including registration, forums, archives, the support portal, game servers (Login, Nova, TC-Prime, Finalizer, Basilisk), and DevOps tooling (Jenkins, Gerrit, and build tools). These are all managed in a Kubernetes cluster across the worker nodes.

    While we will do our best to make the move painless for the community, it's important to remember we are a 100% volunteer org, and our team members have day jobs, family, friends, and other commitments outside the project.

In the coming days, the EC team will announce an event to end Basilisk so we can avoid moving more than 650 GB of disk usage, 128 GB of RAM usage, and many terabytes of backups. This is the first of several steps we will take to simplify our environment, optimizing for cost and reducing the time needed to move our services to new hosting.
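For a sense of why we'd rather not move that data at all, here is a rough transfer-time sketch (the ~650 GB figure is from the post; the link speeds and the 80% effective throughput are illustrative assumptions, and backups would multiply these numbers several times over):

```python
# Rough transfer-time estimate for ~650 GB of on-disk data.
# Link efficiency of 80% is an assumption; real-world throughput varies.
def transfer_hours(gigabytes: float, link_gbps: float, efficiency: float = 0.8) -> float:
    bits = gigabytes * 8e9                        # decimal GB -> bits
    return bits / (link_gbps * 1e9 * efficiency) / 3600

for gbps in (1, 10):
    print(f"{gbps:>2} Gbps link: ~{transfer_hours(650, gbps):.1f} hours")
```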

When we are ready to cut over Finalizer, we will post a notice of the actual move date both on the forums and in-game. We will move other services incrementally as we progress to the new environment, hopefully with minimal disruption.


    Thank You,

    ~SWGEmu Staff
    Last edited by Lolindir; 11-01-2022 at 05:20 PM.
    Lolindir
    SWGEmu Admin

    SWGEmu is a non-profit, open source community project.
    How to report bugs | Mantis (Bug Tracker) | Live Support
    Install SWGEmu | Fix SWGEmu | Submit a ticket

  2. #2
    Senior Member SLiFeR's Avatar
    Join Date
    Dec 2006
    Posts
    1,695
    Play Stats
    126h last 30d
    Good update, thank you.
    <JuDGE> Guild Leader | First SWGEmu Jedi Knight & Rank 11 Jedi Council Leader

  3. #3
    Banned
    Join Date
    Aug 2008
    Location
    Boston
    Posts
    5,618
    Play Stats
    Inactive
It's tough when economic realities impact the virtual world like this.

I'm sure many people will miss Basilisk and its ability to offer the most accurate pre-CU experience since the CU was patched.

  4. #4
    Junior Member Tomahawk's Avatar
    Join Date
    Feb 2012
    Posts
    100
    Play Stats
    232h last 30d
    All this extra work for you

    Thank you for all your amazing work! And Goodbye Bas !!!

    **DUTY FREE SHOP** Mos Eisley 2600 -4530
    Drop off Vendor = "Misc"
    IngameName "Chef"

  5. #5
    Junior Member neopixie's Avatar
    Join Date
    Nov 2008
    Location
    London, UK
    Posts
    105
    Play Stats
    Inactive
3 questions:

1) Why does the server 'need' to be in New Jersey?
2) Why does it 'need' a 10Gbit pipe? Although not game related, we're running a financial server, an order processing system, a webhost, B2B and global manufacturing servers from our in-house datacenter on a 1Gbit pipe, pushing roughly 15-20k x 0.5-2MB worth of data every hour, and we never hit above 250Mbit, let alone 1Gbit... so your need for 10Gbit seems a little overkill.
3) How on earth are you paying $1,200 a month for that? Giving the benefit of the doubt on the 'need' for 10Gbit, allowing the server to sit outside New Jersey, and looking at other providers, I was able to find a 2x AMD 7551 (32C/64T x2), 256GB DDR4, 2x 1TB NVMe, 2x 8TB HDD for €575 ($568.96) a month.

Not a dig at the project; you're doing great work so far. These are just honest questions, and now seems a good time to ask why you're paying what you're paying and why you need what you have.

  6. #6
    SWGEmu Admin Lolindir's Avatar
    Join Date
    Jul 2011
    Location
    Norway
    Posts
    12,548
    Play Stats
    Inactive
    Quote Originally Posted by neopixie View Post
    3 questions..
    1) Why does the server 'need' to be in New Jersey?
Because latency should be about the same for all players AFTER the move as it is before, somewhere latency-wise close to this geolocation will work best. It doesn't have to be in NJ; that's just where it is now. When we moved from Dallas to NJ, there was no end of complaints from players who noticed changes in latency.
    Quote Originally Posted by neopixie View Post
2) Why does it 'need' a 10Gbit pipe? Although not game related, we're running a financial server, an order processing system, a webhost, B2B and global manufacturing servers from our in-house datacenter on a 1Gbit pipe, pushing roughly 15-20k x 0.5-2MB worth of data every hour, and we never hit above 250Mbit, let alone 1Gbit... so your need for 10Gbit seems a little overkill.
When we have events and 1,500 people are on at once, we easily exceed 1 Gbit peaks; also, our offsite backups take several hours even with 10 Gbit, and we don't want to stop doing regular full backups.
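As a quick sanity check on that peak figure (the per-player number below is a derived average, not a measurement, and it ignores backups and the other services sharing the pipe):

```python
# Implied per-player bandwidth if 1,500 concurrent players push a 1 Gbit/s peak.
players = 1500
peak_kbit = 1_000_000          # 1 Gbit/s expressed in kbit/s
per_player = peak_kbit / players

print(f"~{per_player:.0f} kbit/s per player on average at peak")   # ~667 kbit/s
```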
    Quote Originally Posted by neopixie View Post
3) How on earth are you paying $1,200 a month for that? Giving the benefit of the doubt on the 'need' for 10Gbit, allowing the server to sit outside New Jersey, and looking at other providers, I was able to find a 2x AMD 7551 (32C/64T x2), 256GB DDR4, 2x 1TB NVMe, 2x 8TB HDD for €575 ($568.96) a month.
We have 3 servers, not just 1. The primary runs Finalizer, the Login server, and other latency-sensitive workloads. The secondary runs TC-Prime and the build environment (i.e. compiling, running tests, and pushing the Docker images), plus other services like a local Docker registry, archived forums, live forums, CSR tooling, Jenkins, Gerrit, TC-Nova, and a host of other things we can "pile on" in one box without latency concerns. The third server runs all our management tools and several VMs with things like DBs or isolated Docker builders. We think we can optimize that one out, but it won't be simple.
    Quote Originally Posted by neopixie View Post
    Not a dig on the project as you're doing great work so far. Just honest question that have now come to a good time to ask why you're paying for what you're getting and why you need what you have.
This project not only runs a server for players, it runs many services to support the community, CSRs, Devs, and testing servers. Plus we do hourly and daily snapshots, and we ship backups off-site daily and weekly.
Also, in the past our ISP had numerous networking issues from being highly oversubscribed, and when we needed support it literally took them a week to reply to tickets. With packet.net we had much better support and almost zero network issues, and with the one hardware issue we had, they helped us right away rather than pointing fingers at us.
All this goes to say we don't want to move to some fly-by-night provider, or a cheap low-quality one, nor one who packs their servers so tight their buildings catch on fire.

Just to show off some of the loads we have. Remember, in these graphs Mb isn't megabits, but megabytes.
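(For reference, the bits-vs-bytes factor behind that caveat is just a division by 8; the sample values below are illustrative, not from our graphs:)

```python
# Megabits (Mb) vs. megabytes (MB): 8 bits per byte.
def mbit_to_mbyte(mbit: float) -> float:
    return mbit / 8.0

print(mbit_to_mbyte(1000))   # 125.0 -> a saturated 1 Gbit/s link moves 125 MB/s
print(mbit_to_mbyte(250))    # 31.25 -> 250 Mbit/s is ~31 MB/s
```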


    Lolindir
    SWGEmu Admin

    SWGEmu is a non-profit, open source community project.
    How to report bugs | Mantis (Bug Tracker) | Live Support
    Install SWGEmu | Fix SWGEmu | Submit a ticket

  7. #7
    Junior Member neopixie's Avatar
    Join Date
    Nov 2008
    Location
    London, UK
    Posts
    105
    Play Stats
    Inactive
Great to see an actual answer to questions many have wondered about over the years; respect for that.

It's a shame, especially price-wise. With Lolindir also being in the EU/Nordic area, I hope he can agree: dedicated servers and datacenters are cheap as chips here, especially with the likes of Ionos, 123 and Hetzner.

Has the idea of just buying hardware and colocating ever come into consideration?

  8. #8
    Junior Member neopixie's Avatar
    Join Date
    Nov 2008
    Location
    London, UK
    Posts
    105
    Play Stats
    Inactive
Also, those pictures are megabits (Mb), not megabytes (MB)...

  9. #9
    Junior Member
    Join Date
    May 2018
    Location
    San Marino, CA
    Posts
    79
    Play Stats
    Inactive
    If you're considering the big three cloud providers, let me know if you can use any assistance in calculating and comparing costs. I work for a company with enormous footprints in all three, and have quite a bit of experience with these sorts of migrations.
    Eirra Darkwave - Swordsman/Rifleman | Veeleoda Isea - Jedi Padawan | Mitai Isea - Creature Handler <HNR>

  10. #10
    Junior Member neopixie's Avatar
    Join Date
    Nov 2008
    Location
    London, UK
    Posts
    105
    Play Stats
    Inactive
With the shutdown of Bas now pretty much confirmed...

Can we get some insight into what the donations would now actually be funding?

Especially with the data that has been shown, the server is not even hitting close to 1Gbit of bandwidth, let alone 10Gbit.

  11. #11
    Developer
    Join Date
    Sep 2011
    Location
    New York, NY
    Posts
    1,569
    Play Stats
    Inactive
    Quote Originally Posted by neopixie View Post
With the shutdown of Bas now pretty much confirmed...

Can we get some insight into what the donations would now actually be funding?

Especially with the data that has been shown, the server is not even hitting close to 1Gbit of bandwidth, let alone 10Gbit.
I've been away with, well, you know, Real Life(tm): had to move, big changes at work, needed to pay my personal bills, etc.

The picture shared by Lolindir was from the ISP's network report in MB (megabytes), not Mb (megabits); we pay by megabytes per month.

    Also, internet speed is only one of the many network criteria; latency, loss, jitter, and availability are important.

Basilisk consumes 600GB+ of online disk and 64GB of RAM, but now Finalizer actually outstrips it in RAM and CPU and is starting to build up quite an online disk footprint.

This is part of why we're optimizing Basilisk out of the environment; it's wasting resources while supporting only a handful of players.

Also, it's yet another server we have to support: keeping track of it, handling crashes, and dealing with support questions, all by 100% volunteer staff.

As stated by Lolindir, the environment runs many services; Basilisk is just one of them. You can see the current setup here.

I think I can remove one of the three servers by pushing all its services onto the secondary server. It'll add some lag to the forums, but that's not super important; it will also slow down builds, which is annoying, but that's life.

And donations not only pay for the servers but our infra costs like our GitHub org, off-site backups (90 days = 6+ terabytes), domain name registrations, and so on. And we had a great deal for the servers and network from packet.net.
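To put that retention window in perspective (both figures are from this post; the per-day number is just the implied average, not a measurement):

```python
# Implied average backup size for a 90-day retention window totaling 6+ TB.
days = 90
total_tb = 6.0
per_day_gb = total_tb * 1000 / days

print(f"~{per_day_gb:.0f} GB per daily backup on average")   # ~67 GB/day
```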

However, those days are past and we have to move. And no, we're not moving to a provider whose datacenters burn up because they pack their servers too tightly, nor are we moving to an EU-based location.

    The project is a US-based legal entity and does not have the resources to deal with laws in other countries, regulations, and other challenges such as currency or filing taxes in other countries.

The live environment is not just a single simple server. Remember we run a CI/CD pipeline (Jenkins, Gerrit, Docker), a login server, Finalizer, multiple test servers (Nova, TC-Prime), global registration, bug reporting, forums, archives, CSR/support tooling, a support ticketing system, Eye of Sauron for anti-cheat analytics and alerting, primary and replica databases to support all these, and monitoring and alerting. In total, there are 38 production containers and 3 VMs running on the three servers.

Finalizer has seen peaks of 2,500 logins a day (1,725 online at a time), and at those times we're not only burning network bandwidth, we're also burning lots of RAM, and CPU is often right at the edge even with 24 cores and 48 hardware threads.

    The reality is we've gotten away with a 55%+ discount on our servers for years now, and the donations barely cover that these days.

We will do our best to optimize the services we support to keep the project going, but over time, if donations go down, we will have to shut down more services and combine others onto a smaller footprint, and people will just have to deal with the latency and availability issues.

Oh, and don't forget all of us are 100% volunteers; we run all this in our spare time, and the last thing we need is more randomness injected by running this all in a closet somewhere.

PS: This move is already costing me 20 hours a week in prep; when it's done, it'll be 160+ hours of my life gone. Please be careful about waving hands and saying things are easy; they are not, and I know firsthand.
    LordKator
    Developer

    lordkator@swgemu.com | www.swgemu.com
    SWGEmu is a non-profit, open source community project.
    SWGEmu FAQ | Install SWGEmu | Report Bugs

  12. #12
    Junior Member
    Join Date
    Dec 2006
    Posts
    246
    Play Stats
    Inactive
    Quote Originally Posted by lordkator View Post
...
Brilliant work, LK. A great explanation to add to your already big workload... We appreciate you and the team massively! Hopefully the donations stay up and we don't have to downsize! Especially with what's going on in the world financially, streamlining and getting rid of the 'non-essentials' is good project management!
    Praxi Starwalker
    Bria Server
    'RFR' - Rebel Force Recon
    'SOH' -Soldiers of Honour

  13. #13
    Junior Member neopixie's Avatar
    Join Date
    Nov 2008
    Location
    London, UK
    Posts
    105
    Play Stats
    Inactive
    Quote Originally Posted by lordkator View Post
    PS: This move is already costing me 20 hours a week in prep; when it's done, it'll be 160+ hours of my life gone; please be careful about waving hands and saying things are easy, they are not, and I know first hand.
No one here said it was easy?

I, among others, pretty much do this daily as a real life(tm) job and understand it can be a royal pain in the breasticales. It's more a curiosity question of how you're hitting such high server usage and bandwidth (although I'd maybe get your ISP to display it as MB and not Mb) when a certain other 'place in a galaxy far far away' is able to run 'double' the pop at a fraction of the cost.

  14. #14
    Junior Member pugguh's Avatar
    Join Date
    May 2015
    Location
    Panhandle of Hell
    Posts
    163
    Play Stats
    Inactive
    Quote Originally Posted by lordkator View Post
...
    Thanks for the update LK.

    *Czar-Mart*drop off vendor located at: /way -4298 5462 Insane, Naboo

  15. #15
    Newbie
    Join Date
    Nov 2012
    Posts
    5
    Play Stats
    Inactive
Thanks for the info and all the time and effort you put into this project.
