• Announcements

Published on 12-10-2017 09:44 AM

    Weekly Update

    The SWGEmu Development Division

    We are using Jenkins & Gerrit. You can find all commits to Test Center Nova here. You can still report all bugs you find on Test Center Nova to the Mantis Bug Tracker. A helpful guide to bug reporting can be found here.
Our current focus is the Publish 10 Checklist. Features planned for Publish 10 can be seen in the SWGEmu Roadmaps.

Non-linked Mantis reports are in the dev/QA area and cannot be viewed without dev/QA access.

    (Stable) = Commits are now on Basilisk
    (Unstable) = Commits are on TC Nova and are being tested

    TheAnswer (Lead Developer)
• (unstable)[Fixed] Minor optimization
• (unstable)[Added] Cantina crackdown events to Naboo/Tatooine/Corellia
    • (unstable)[Added] Visibility parameter to snoop command
    • (unstable)[Fixed] Stability issue
• (unstable)[Fixed] Architect trial NPC aggroing if player has negative Gungan faction
• (stable)[Fixed] Jedi not gaining visibility on initiating combat in range of humanoid NPCs/players
• (unstable)[Added] Option to Dageerin convos to replace an existing Radiation sensor
    • (unstable)[Fixed] Bug in spice mom padawan trial convo. Mantis #7747

    Reoze (Developer)
    • (unstable)[Added] Command to dump COV contents
    • (unstable)[Fixed] Client generating names that are already taken
    • (unstable)[Fixed] Stability issue

    ~ The SWGEmu Development Division
Published on 12-05-2017 11:37 AM

    Infrastructure overhaul of 2017

    December 2017

As many in the community are aware, on August 9, 2017, Basilisk experienced an extended unplanned outage due to disk issues on the server. As that process unfolded, TheAnswer promised the community that we would share what happened and what we planned to do about it.

    TL;DR (Summary)

We lost disks on the original Basilisk server, which forced us to do manual work to restore the game server databases; that work culminated in the restoration of the server on August 16, 2017.

    As of December 4, 2017 all services have been moved to a new environment that is much more robust, has considerably more resources and is designed for us to handle hardware outages much more easily in the future. Our new environment includes redundant servers, faster internet access, more CPU power, more RAM and improved storage redundancy and speed.

    The completion of this migration provides a stable platform for the community for many years into the future.

    What happened?

When the original server for Basilisk was deployed in 2006 it was a state-of-the-art machine, and it received a number of hardware upgrades over the years.

The system was configured with multiple solid-state drives (SSDs) set up to keep two on-line live copies of the data (a RAID 1 mirror). The goal of such a setup is that when one disk fails we can replace it and rebuild the mirror before any data is lost. This setup also provides higher read rates, because the system can ask both disks for different data at the same time.
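
The mirroring idea can be sketched as a toy model (purely illustrative; real RAID 1 operates on raw disk blocks, not key/value pairs):

```python
# Toy model of a RAID 1 mirror: every write lands on both disks,
# so the array survives the loss of either one. Illustrative sketch
# only, not how a real RAID controller works.

class Raid1Mirror:
    def __init__(self):
        self.disks = [{}, {}]  # two live copies of the data

    def write(self, key, value):
        for disk in self.disks:       # mirror the write to both disks
            disk[key] = value

    def read(self, key):
        # Either disk can answer; serving reads from both in parallel
        # is what gives RAID 1 its higher read throughput.
        for disk in self.disks:
            if key in disk:
                return disk[key]
        raise KeyError(key)

    def fail_disk(self, index):
        self.disks[index] = {}        # simulate a disk dying

array = Raid1Mirror()
array.write("player:42", "progress data")
array.fail_disk(0)                    # one disk fails...
print(array.read("player:42"))        # ...the data is still readable
```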

The week before this incident one of the mirrored disks failed; our hosting provider failed to notify us, and meanwhile we did not receive the emails from the server alerting us to disk issues. In a sad twist of fate, the second disk mirroring the failed drive also started to fail a week later. The odds of two disks failing within such a short timeframe are fairly low.
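
A back-of-the-envelope estimate shows why this is rare. The 5% annual failure rate below is an assumed figure for illustration, and the independence assumption is optimistic: drives from the same batch with the same wear pattern tend to fail together, which is exactly what happened here.

```python
# Rough odds of the surviving mirror member dying within the one-week
# window before the first failed disk is replaced. Assumed numbers,
# independence assumed -- both assumptions are generous to the drives.

annual_failure_rate = 0.05                    # assumed ~5% AFR
weekly_failure_rate = annual_failure_rate / 52

# Given one disk has already died, chance the survivor also dies
# before the one-week rebuild window closes:
p_second_failure = weekly_failure_rate
print(f"~{p_second_failure:.4%} chance per week")  # roughly 0.1%
```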

Because of the size of Basilisk's database (over 600 GB), we were doing backups on an ad-hoc basis to the Nova server's disks. This meant any restore would lose significant amounts of player progress. With this in mind, TheAnswer worked with low-level filesystem debugging tools to extract the database files from the failing drive. This was a painful, slow process that required many iterations to get the data back to a usable state. Much of it was manual, and each step could take many hours to run before the results were known and a decision could be made on the next step. After many sleepless nights, TheAnswer was able to get Basilisk back online on August 16.

    How do we avoid this in the future?

In response to this event we took inventory of all our services and analyzed our setup. As you can imagine, after more than 10 years the project had accumulated many services and servers to run our community. This setup was very difficult to maintain due to the many dependencies between the various services and the underlying software, operating systems and hardware.

After debating various paths forward, the team decided it was time to overhaul our infrastructure. We decided to rebuild from scratch on new bare-metal servers from packet.net and to use an open-source technology called Kubernetes to manage the services as individual, movable containers. We would deploy our servers on top of ZFS storage pools, which give us modern data-safety and management tools.

Deploying on packet.net gives us an incredible amount of flexibility: rather than opening tickets, asking for new machines, or emailing back and forth, we can launch new resources using the packet.net API. In addition, we have reserved three servers that run our infrastructure and provide on-line, ready-to-run redundancy for our services.

Containerizing our services and using Kubernetes to manage them lets us quickly reschedule services onto other hardware if we lose a node or it becomes overloaded. The industry is rapidly adopting Kubernetes (originally a Google technology), and by standardizing on this system we can leverage other providers if needed, or quickly expand our footprint into any of packet.net's datacenters.
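
The rescheduling behaviour can be modelled as a toy scheduler (illustrative only; a real Kubernetes scheduler weighs resource requests, affinity rules and much more, and the service/server names below are made up):

```python
# Toy model of rescheduling containerized services after losing a
# node: re-run placement over the surviving nodes and every service
# keeps running. Hypothetical names; not the project's real topology.

def schedule(services, nodes):
    placement = {}
    for i, svc in enumerate(services):
        placement[svc] = nodes[i % len(nodes)]  # spread round-robin
    return placement

nodes = ["server-1", "server-2", "server-3"]
services = ["forums", "jenkins", "gerrit", "basilisk", "nova"]

placement = schedule(services, nodes)
nodes.remove("server-2")              # simulate losing a node
placement = schedule(services, nodes) # everything lands on survivors
print(placement)
```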

By utilizing ZFS for our storage system we are able to take instantaneous snapshots of the data underlying a service. We set up these storage volumes on very high-speed non-volatile memory (PCIe NVMe) and join the drives in redundant, high-speed configurations (RAID 10). For most services we were able to deploy packet.net's block store volumes: high-speed (PCIe NVMe) network-attached volumes that we can quickly move between servers if one crashes or becomes overloaded with work.

    This combination of our hosting, containerization and storage strategy provides us with many options that were not available to us before the overhaul. This investment should power the project's needs for many years to come and will make it easier for the team to manage existing services and provide new and exciting capabilities to the community in the future.

We expect the short-term financial impact to be a bit higher, because services will transition and overlap for a while and we pay for the bandwidth to copy files into the new environment. Over time we predict the costs will be about the same as our previous setup, with easily a 10x increase in capacity and capabilities.


As of December 2, 2017, all services have been moved to the new infrastructure. Basilisk has been happily running on the new hardware since September 24, 2017, and Nova followed not long after. We have moved everything to the new infrastructure: the forums, the support site, Jenkins, Gerrit and various other servers.

We take daily snapshot backups that are pushed to external block store volumes, so even if we lose a host completely we would, at worst, lose one day of progression. Meanwhile, we have deployed database logging on Basilisk and Nova so that every transaction is saved to storage in a way that lets us roll forward from a database crash, if needed, by replaying the changes that happened since the prior copy of the database.
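
The roll-forward idea can be sketched in a few lines (a conceptual model, not the actual Basilisk tooling): restore the last snapshot, then replay every logged transaction recorded after it.

```python
# Conceptual sketch of log-based roll-forward recovery: start from
# the last known-good snapshot, then replay the transaction log in
# order, applying only the changes made after the snapshot was taken.

def roll_forward(snapshot, txn_log, snapshot_time):
    db = dict(snapshot)  # restore the last known-good copy
    for timestamp, key, value in txn_log:
        if timestamp > snapshot_time:   # only changes after the snapshot
            db[key] = value             # replay the logged change
    return db

snapshot = {"credits": 100}             # daily backup taken at t=10
txn_log = [(5, "credits", 90),          # already in the snapshot
           (12, "credits", 150),        # must be replayed
           (15, "xp", 3000)]            # must be replayed

recovered = roll_forward(snapshot, txn_log, snapshot_time=10)
print(recovered)  # {'credits': 150, 'xp': 3000}
```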

    We maintain daily snapshots for a week both locally on the server's disks and remotely on blockstore volumes attached over the network.
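
A seven-day retention window like the one described can be sketched as follows (a hypothetical helper, not the project's real backup scripts):

```python
from datetime import date, timedelta

# Sketch of a 7-day snapshot retention policy: keep only snapshots
# taken within the last week, prune everything older. Hypothetical
# helper for illustration.

def prune_snapshots(snapshot_dates, today, keep_days=7):
    cutoff = today - timedelta(days=keep_days)
    keep = [d for d in snapshot_dates if d > cutoff]
    prune = [d for d in snapshot_dates if d <= cutoff]
    return keep, prune

today = date(2017, 12, 5)
snapshots = [today - timedelta(days=n) for n in range(10)]  # 10 dailies

keep, prune = prune_snapshots(snapshots, today)
print(len(keep), len(prune))  # 7 kept, 3 pruned
```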

    What's it look like?

    Here is a simplified diagram of our current environment for your viewing pleasure:

    Random Stats
    • Migrated 16 distinct services
    • Over 3 Terabytes of data migrated
    • 3,009,329 lines of PHP
    • 18,640,175 lines of C++
    • 102,682,949 lines of Lua
    • 12 copies of sceneobjects.db in various folders
• 3 people actually read this far in this post.

Next Steps

    We've started creating alert bots that send messages to a channel the staff can monitor for issues so they can help escalate as needed.

We will be adding more alerts, testing some deep-storage solutions (AWS S3 Glacier and the like) and adding more tools so other members of the staff can help with various tasks without being Unix admin experts.

~ lordkator, DevOps Engineer
Published on 06-09-2011 04:54 PM

    SWGEmu Recruitment

    Updated: April 12, 2013
    Development Division

    We are looking for passionate programmers who can dedicate some of their free time to volunteer for this project.

    Required Skills

    - Intermediate knowledge of C++ and/or Java
- Basic experience with the Unix/Linux environment
- Familiarity with scripting languages such as Lua

    Recommended Skills

- Object-oriented programming and design patterns
    - Understanding of transactional systems and client/server architecture
    - Concurrent programming (threads, mutexes)
    - Experience with SQL
    - Reverse engineering skills
    - Game modding experience

    How can I contribute?

Contributing is easy: once you have set up the environment and Git, you submit patches for review to http://gerrit.swgemu.com. After we approve some of your patches and see that you are comfortable with the framework, you will get Git access so you can commit directly.
If you have questions about the implementations, you can ask TheAnswer directly on IRC, via private message or in the #opendev channel.

    ~ The SWGEmu Development Division
Published on 06-10-2015 09:20 PM

    Join Staff - SWGEmu EC/CSR/Support training program

    June 2015
    The SWGEmu

    Greetings SWGEmu Community!

As a volunteer project, SWGEmu is constantly looking for new community members to step up to assist and support the community. If you are interested in taking on the responsibilities of an SWGEmu Staff member to further our goals, we encourage you to take the time to apply for our SWGEmu EC/CSR/Support training program.

The SWGEmu EC/CSR/Support training program is designed to introduce community members to different aspects of the duties and responsibilities of the SWGEmu Staff, while giving them a chance to develop a close working relationship with the team and eventually join the SWGEmu Staff.

The following are the minimum requirements to be accepted into this program:
    • Professionalism
    • Excellent work ethic
    • Positive attitude
    • Ability to work and learn in a team environment
    • Sufficient free time
• Knowledge of the SWGEmu Project

    Program Details
    1. Apply to the SWGEmu EC/CSR/Support training program by submitting an application (see below).
2. After reviewing the application, we may offer an interview before accepting or declining the volunteer.
    3. Once accepted for SWGEmu EC/CSR/Support training program, the volunteer will be mentored by a member of the SWGEmu Staff for a period of 2 months.
    4. During these 2 months, the intern will assist in IRC, on forums and on Live support.
5. After the SWGEmu EC/CSR/Support training period has ended, the SWGEmu Staff will complete an evaluation to determine whether the trainee has met all of the duties and expectations.
    6. The trainee may then remain as a support assistant or apply for an available position in the SWGEmu Staff.
7. If accepted, an apprenticeship period of 4-8 weeks will begin, and the apprentice will work closely with the staff of the department they applied for.
8. At the end of the apprenticeship, another evaluation will be completed to determine whether the apprentice would be a good fit for the chosen SWGEmu Staff department.
    9. If accepted, the apprentice will be offered the opportunity to join the SWGEmu Staff as a full-time member.
10. The new SWGEmu Staff member will perform their department's duties as well as continue assisting with Support.

Upon completion of the internship period, apprenticeships may be applied for in the following positions:
    • Support Staff (Community Support Representative)
    • Event Coordinator
• Community Relations (Forum Moderator)

We are very excited to be offering this program to our community. It is our hope that it will bring forward community members who want to help SWGEmu succeed and provide our Community with the best experience possible.

    Submitting Your Application

Please complete the application below and email it to application@swgemu.com with 'SWGEmu Internship Application' as the subject and the application attached. If you are selected to participate, we will contact you by email informing you of such and provide information about the next step in the process.

    What is your forum account name?

    What is your in-game character name?

    What is your IRC nickname?

    What is your age?

    What is your profession in real life? If you are a student, what are you studying?

    What time zone do you live in? What times/days of the week will you usually be available to spend time assisting in Live Support?

    Do you know any current or former staff members? If so, how?

    How long have you been following this project?

    Why do you want to become a member of the SWGEmu EC/CSR/Support training program?

    Please write a short paragraph detailing what Pre-CU Star Wars Galaxies means to you.

    If you are accepted into the SWGEmu EC/CSR/Support training program and are looking at applying for a SWGEmu Staff position, what position would you apply for and why?

    ~The SWGEmu Staff
Published on 09-08-2014 02:31 PM

    Please read all SWGEmu Rules And Policies

    September 2014
    SWGEmu staff


    The only exception to SWGEmu Rules and Policies is the Sarlacc Pit 2.5 section, which has its own set of Rules and Policies.