Unfortunately, Zelda Universe has run into some major database issues (for all the technical details, see our thread on the forums), and we have lost all of our posts and articles from June 12th up until this point. As you can probably tell, this throws a lot of things out of order, so it will take us a while to re-add staff, fix a bunch of errors, and sort out everything else these issues have caused.

Our forums were slightly luckier, and we have only lost three months of their data (dating back to last November), but if you joined the forums in the last three months you will have to re-register. If you need something changed on the forums, refer to my thread for such issues.

This is an unprecedented crash for ZU, so please bear with us as we sort out its many implications.

-Cody, ZU Webmaster

  • Duncan

    Best of luck getting everything back in working order.

  • Kelly

    I'm sorry, Cody! I really wish I could help! Anything that I can do, just tell me!
    I wish you luck getting everything back together.

  • shadowoflight

    There are plenty of fairly cheap ways to recover data, you know.

    • CodyDavies

      I run the site on the people side, not the server side, but the server people will work on that kind of thing when they wake up (time zone differences).

      • shadowoflight

        That's a different story then, I hope they're able to recover all lost data. 🙂

  • Lup

    If it helps any, here's an html export from my RSS reader containing all of your main site posts since August 10th: https://www.dropbox.com/s/xveh2kz6cvbe0os/Zelda%2

    Might be easier to recover everything from that than Google cache or similar 🙂

    • That's amazing – thank you so much! With that export, we'll be able to repost everything we've lost.

      We definitely owe you one for this. There aren't enough words of thanks for that export 🙂

  • Anon

    welp

  • Vladislak

    Sorry to hear that. Oh well, it's nothing to get too upset about, so no sense crying over spilt milk, right?

  • I hope you guys recover as much as possible! It always sucks when things like this happen. You can only prevent so much. 🙁

  • Destiny

    That's sad, I know exactly how you guys feel… I once wrote a big story, and I accidentally clicked something: gone, deleted, trashed, bye bye, never to be restored. :/ A year of typing gone! So I hope you guys can get it restored! 🙂

  • K2L

    Maybe if you guys used a more reliable server, none of this would have happened. But hey, Jason Rappaport is Jason Rappaport, soooo……

    • Jason Rappaport

      Everyone encounters problems every now and then! Nobody's perfect, and we're no exception. But we care and we do the best we can. I for one am so proud that everyone quickly came together after this event and got the entire site restored in just a day. Not just any community would come forward and do that – it's pretty magical.

      I also suggest you check out our thread on the matter so that you understand what happened. I think it'll help: http://www.zeldauniverse.net/forums/feedback-sugg

      • K2L

        Of course everyone has problems, but this isn't even the first time this has happened. Zelda Wiki was in a really bad state due to a poor server transfer back in 2011, when we were getting Zelda news almost daily, especially about Skyward Sword (and for those who don't know, Zelda Wiki and Zelda Universe are run on the same server), and we were unable to edit the articles to add the new info. That's one (out of MANY) reasons why I'm not at the wiki anymore, and one of the reasons I don't feel confident about joining this site.

        • I'm sorry you feel that way! I won't try to skirt the issue; we've had some serious server mismanagement in the past, mostly due to monetary constraints. We just couldn't afford servers that could handle our massive amounts of traffic. For over a year now, though, server issues have been few and far between, with only the occasional downtime to move servers (like last October) or add new ones.

          However, things have changed since 2011 (it's two years later!), and Zelda Universe and Zelda Wiki actually haven't been on the same servers for a while. That's why Zelda Universe crashed, but Zelda Wiki didn't.

          We were also fortunate enough to get sponsored by Microsoft, so since last year we've actually been running on a shiny new cloud server setup with virtually unlimited resources at no cost. However, these cloud services from Microsoft are still fairly new, and some services are actually in beta and are not documented well, which led to the configuration error that caused the current crash. You could say we shouldn't run beta software from Microsoft that isn't documented well, but they're phasing out the alternatives and we actually really, really like the beta product. Barring our own human error here, it's never once gone down and is really intelligent.

          Anyways, nowadays we actually know what we're doing and don't rely on any external support to set up and run our servers like we used to. Although it may not seem like it, Zelda Universe and Zelda Wiki together are on a cluster of multiple cloud servers with plenty of redundancy, bandwidth, and horsepower to spare, and we have a great server administrator named Scott who very skillfully recovered us from this debacle.

          tl;dr: We know what we're doing now, unlike two years ago. 🙂

          • K2L

            I appreciate your warm replies. I didn't know the two sites had already been split onto different servers (it's been a year since I was zapped from the wiki due to an unfortunate incident, so I wasn't aware of what happened afterwards). Sigh, I just wish things had happened under better circumstances. I hope you guys can overcome this incident.

  • Rob

    If this site were just a "hobbyist" one, fair enough, I'd feel bad for you. But it's much more professional than that (at least it appears so from the outside – which is great!). However, "professionals" should know to take regular server-side backups! This whole crash should not really have been an issue! I suggest you all look into backup procedures ASAP to prevent future problems.

    • Jason Rappaport

      Actually, the issue was one of configuration! We do normally have backups running daily. However, the same issue that affected our database also affected its backups. It turned out to be a perfect storm of "we messed up" – a small but critical oversight several months ago that came back to bite us.

      Needless to say, although we're no strangers to running servers, we've learned our lesson from this one and quickly squashed the bad configuration. This won't be happening again! I'm incredibly happy that the community has come together and helped us restore what we've lost, which is amazing. ZU's already back up with all of its stuff 😀

      • Rob

        If it was an issue of configuration, does that mean to say the backups weren't configured to run properly? 😛
        In a strange way I'm happy to have read above that it was kind of Microsoft's fault… although perhaps "undocumented features" could be properly tested in future using a dummy version of the site before using them for real? Nevertheless, I think we're all pleased and impressed with the speed of recovery.

        • The backups were configured, but the issue was storage on the database servers. Apparently Microsoft creates both permanent and temporary storage, but the two were either not clearly marked, or one was hidden, or something else confusing was going on. In any case, it was non-obvious enough that our server administrator mistook the temporary storage for the permanent storage. Since the temporary storage is not emptied that often, it took until now for something to happen that would make us notice.

          The fix was pretty simple, though: don't use the temporary storage, haha. Unfortunately, because that was used for the whole server, backups AND primary copies were wiped out. Our fix for that, which I consider a separate issue, is to have separate backup servers and multiple database servers for better redundancy.

          We always test these things beforehand, and actually had Zelda Universe in test mode for several months before moving servers. However, due to the poor documentation and the nature of the configuration (it seemed to be working until it didn't), there was no way to know that things weren't set up right until the giant explosion. It's also hard to know about undocumented features if they're not documented, haha. Nevertheless, we've definitely gotten a handle on it now and it won't be happening ever again.
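
          For anyone curious, here's roughly the kind of sanity check involved. This is only an illustrative sketch, not our actual setup: the mount points and data directory below are assumptions (on Linux VMs the ephemeral disk is often mounted under /mnt), and the real paths depend on the VM image and database configuration.

          ```python
          #!/usr/bin/env python3
          """Illustrative check: warn if the database data directory lives on the
          same device as a temporary (ephemeral) mount that can be wiped.

          All paths here are hypothetical examples, not ZU's real configuration."""
          import os

          TEMP_MOUNTS = ("/mnt", "/mnt/resource")   # common ephemeral-disk mounts (assumption)
          DATADIR = "/var/lib/mysql"                # hypothetical database data directory

          def device_of(path):
              """Return the device ID the path lives on."""
              return os.stat(path).st_dev

          data_dev = device_of(DATADIR)
          for mount in TEMP_MOUNTS:
              if os.path.isdir(mount) and device_of(mount) == data_dev:
                  raise SystemExit(f"WARNING: {DATADIR} shares a device with {mount} -- "
                                   "anything stored here can vanish!")
          print(f"OK: {DATADIR} is not on a temporary mount.")
          ```

          The same idea applies to wherever the backups land: if the backup destination shares a device with a temporary mount, the backups go down with the data.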

          • Rob

            Thanks for your replies, Jason. As a bit of a tech geek (or, tech-tite, if you will 😀 ), I had been very curious. I can definitely see how problems like this may only present themselves after some time. I'm just imagining how it must feel to be the server administrator (Scott?) – checking the server and finding everything gone! I feel really sorry for him – he must have been tearing his hair out trying to find the cause. Congrats to you all on coping so well, by the way, and thank you for keeping us all updated. My earlier comments weren't criticisms, but rather my cynical way of challenging you not to get complacent and to keep imagining worst-case scenarios. It never hurts to plan for the worst 🙂

            Just a quick observation. You mentioned using separate backup servers from now on. Whilst that certainly sounds sensible, it surprises me that this wasn't already the case. What if the main server location were to have a power cut, a fire, or maybe even come under some terrorist attack? These are the kinds of scenarios that huge corporate organisations plan for, and their strategy is exactly as you say – use separate backup servers (PROVIDED that they're in a completely different location). That last bit is important, and I only wanted to bring it to your attention just in case. Hope this helps 🙂

          • I’ll be perfectly honest as I respond to your question about backups: I actually thought we *did* have separate backup servers. It was a huge surprise to me that we did not. From what I thought I knew, Zelda Universe was spread across two virtual file servers, which share a cloud storage bucket, and two virtual database servers, which would provide enough redundancy to behave like a remote backup.

            Apparently that wasn’t the case, and it turns out we only had one database server set up. Obviously this is a totally different oversight from the technical issue, and fixing it alone might not have prevented the crash if both database servers had been set up incorrectly. But yeah, it was a gigantic surprise to me that we weren’t doing regular offsite backups, and I’m making doubly sure that it’s rectified now.

            You might wonder why I don’t know ZU’s server configuration back to front, since I own and operate the site, but I try to leave that to Scott, who knows this stuff much better than I do, and by and large he does an amazing job.

            As for remote backups, MS provides at least six or so options for storage locations on their cloud service (Windows Azure), so we should be good to go as far as that’s concerned. Now we just need to get to work and set it up!
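
            To give a sense of what the offsite piece could look like, here's a rough sketch of a nightly job that dumps the database, compresses it, and uploads it to blob storage somewhere away from the database servers. The database name, container name, and connection string are made up for illustration, and it uses the current azure-storage-blob Python SDK rather than anything we actually run:

            ```python
            #!/usr/bin/env python3
            """Sketch of a nightly offsite backup: dump the database, compress it, and
            upload it to blob storage in a separate location from the database servers.

            Names and credentials below are hypothetical, for illustration only."""
            import gzip
            import os
            import subprocess
            from datetime import datetime, timezone

            from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

            DB_NAME = "zu_production"                      # hypothetical database name
            CONTAINER = "offsite-db-backups"               # hypothetical blob container
            CONN_STR = os.environ["OFFSITE_STORAGE_CONN"]  # kept out of source control

            # Dump the database with a consistent snapshot, then gzip it in memory.
            dump = subprocess.run(["mysqldump", "--single-transaction", DB_NAME],
                                  check=True, capture_output=True).stdout
            payload = gzip.compress(dump)

            # Upload under a timestamped name so older backups are never overwritten.
            stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
            blob = BlobServiceClient.from_connection_string(CONN_STR).get_blob_client(
                container=CONTAINER, blob=f"{DB_NAME}-{stamp}.sql.gz")
            blob.upload_blob(payload)
            print("Uploaded", blob.blob_name)
            ```

            Run something like that from a scheduler on a machine that isn't one of the database servers, so the dumps survive even if those servers are lost.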

  • shadowoflight

    I'm not complaining, and no one should.
    After all, who pays the bill to bring us the news at the end of the day? (Besides fundraising for a major overhaul, that is.)