I'm curious why someone decided it was a good idea not to use donations to replace the HDD lost a week before, and to go ahead with the update anyway.
I think it's easy to be an armchair critic and get upset about why you think X or Y should have happened. What would be harder is to empathize with what the team is going through while developing a game.
In my mind you have to prioritize what is most important, and I doubt redundancy is at the top of their list right now. I mean, it's been stated many, many times that this is a development phase, so don't get too attached to any technological terror you have constructed.
This event was not only possible, but should have been expected.
My thanks to the developers for all the hard work. Long ago I had my doubts as to whether this project would ever get off the ground, and I have been surprised how much it has soared.
Pub 9 represents a day long anticipated, and I think it covers the majority of the ground game as I remember it long ago. The development and implementation of Jump to Lightspeed will have some effect on the ground game, but a mostly indirect one.
A wipe is not the end of the world, or of the game for that matter; it's a new beginning.
Retired Community Relations
SWGEmu is a non-profit, open source community project.
Had we known we would have replaced it. This isn't a situation where we made a conscious decision to ignore a broken HDD.
**** happens; in our case it's a massive amount of ****, but oh well, what's done is done. Instead of throwing blame, which btw will change absolutely nothing, we need to roll up our sleeves and start shoveling it.
I'm not blaming anyone.
"While doing a routine backup before the merge one of the HDD's died, unfortunately that was the second one we lost in a matter of weeks."
This leads me to believe a new HDD could have been purchased prior to this merge, but it wasn't. I'm not against the wipe; I'm asking why the HDD that died weeks ago was not replaced before arguably the most significant patch to date. Which, in my mind, is a perfectly reasonable question to ask.
Hindsight is always 20/20. If they had gone into this merge expecting something to mess up, I'm pretty sure they would have replaced it. But I'm sure they thought it would be a routine merge and all would go well.
But it didn't, and now they get to hear from all you guys, which I'm sure is one thing they never look forward to.
Because HDD failures aren't predictable. Failure rates are quoted as MTBF (Mean Time Between Failures), which is a statistical average across a large population of drives, not a countdown for any single drive; an individual disk can last far longer or die far sooner than the quoted figure. I've had HDDs last for over a decade and others start to fail within their warranty.
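To put a number on that intuition: under the common simplifying assumption that drive lifetimes are roughly exponential (real drives actually follow a bathtub curve, so this is only a sketch), the chance a drive even survives to its quoted MTBF is about 37%, not 50%. The MTBF figure below is a made-up round number for illustration:

```python
import math

def survival(t_hours: float, mtbf_hours: float) -> float:
    """Probability a drive survives past t_hours, assuming
    exponentially distributed lifetimes with mean mtbf_hours."""
    return math.exp(-t_hours / mtbf_hours)

mtbf = 1_000_000  # hypothetical quoted MTBF in hours

# Chance of surviving all the way to the MTBF itself: e^-1, about 37%.
print(round(survival(mtbf, mtbf), 2))      # 0.37

# Chance of surviving 5 years (~43,800 hours) of continuous operation.
print(round(survival(43_800, mtbf), 3))    # 0.957
```

The point is that MTBF tells you about the fleet average, not about when your particular disk will die, which is exactly why you plan for failure instead of predicting it.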
This is a volunteer project that is focused on development and Basilisk is essentially a live test server; there's no guarantee of continuous service and we know that once the project is complete it will be wiped anyway. If this was a business, I'd totally be with you, but it isn't. I guess what it comes down to is this: you're getting alpha access to a game in development and you can choose to donate or you can choose not to donate. Either way, the development goes on and until a golden master/release candidate is ready the devs really don't have an obligation to make user progress ironclad. It's no different to installing a beta OS on your computer, there's always a disclaimer that says "if it ****s up your system, tough ****."
I'm kinda bummed, I was hoping to get home tonight and see the old man. But the more I think about it, it'll be kinda cool to see a hell of a lot more activity and people grinding again.
The wipe isn't what upsets me. It's unfortunate, but that's okay; it's a test server. But there's no good reason not to have proper backup practices in place these days. The 3-2-1 rule is the way to go: 3 copies of your data, on 2 different physical devices, with 1 copy kept offsite. There's lots of free software to do it, too.
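As a minimal sketch of what 3-2-1 looks like in practice (all paths here are temporary stand-ins I made up; a real setup would point copy 2 at a second physical disk and copy 3 at a remote host via rsync or similar):

```python
import shutil
import tarfile
import tempfile
from pathlib import Path

# Copy 1: the live data itself (stand-in directory for illustration).
src = Path(tempfile.mkdtemp())
(src / "db.txt").write_text("player-data")

# Copy 2: stand-in for a second physical disk.
local_backup = Path(tempfile.mkdtemp())

# Copy 3: stand-in for offsite storage; in real use this would be
# an rsync/scp target on another machine, not a local directory.
offsite = Path(tempfile.mkdtemp())

# Archive the data onto the second device...
archive = local_backup / "db-backup.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    tar.add(src, arcname="db")

# ...then mirror that archive offsite.
shutil.copy2(archive, offsite / archive.name)
```

Even a cron job doing essentially this would have turned a dead HDD into a non-event.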