The IT Road Less Traveled

by Thought Leaders at SwishData



Wired Reporter’s Hacked Accounts Should be a Warning to Cloud Customers and Admins

By Jean-Paul Bergeaux
Chief Technology Officer, SwishData

The recent hacking of Wired reporter Mat Honan made many people rethink their personal web security. He explained how the lack of two-factor authentication allowed hackers to social-engineer a password reset and gain access to his Amazon, Gmail, Twitter and iCloud accounts, as well as his iPhone and MacBook. They wiped out his entire Gmail history, his iCloud account, and the data on his iPhone and MacBook (which he was able to recover for $1,700). Luckily for him, the hackers were interested in pulling a prank rather than stealing his money; otherwise they could probably have reached his financial accounts and done even more damage.

However, this should be more than just a warning to personal account holders. It should also be a warning to enterprise admins who use cloud services of any type. Make sure your users are taking the strongest security measures available, and hold yourself and the other admins in your organization to the same standard. If someone gained access to an admin account that controlled an entire enterprise’s cloud infrastructure, imagine the damage they could do! Some cloud offerings include backup and disaster recovery (DR) as part of the contract, but if you’re hacked, how do you know those policies cover that kind of damage? Or that the hackers won’t follow through and eliminate those copies as well?
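The simplest of those measures is two-factor authentication, the control whose absence made this hack possible. For anyone curious about the mechanics, here is a minimal sketch of how time-based one-time passwords (the codes most authenticator apps generate, per RFC 6238) are computed. The shared secret below is made up, and this illustrates the mechanism only; it is not any provider’s actual code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (an RFC 6238-style sketch)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # counter as 8 big-endian bytes
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical shared secret; a real service provisions one per user at enrollment.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code depends on a secret the attacker doesn’t have and a clock, a stolen or reset password alone is no longer enough to take over the account.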

It’s a Scary Thought, But It Doesn’t Have To Be

The only way to completely prevent that kind of situation is to keep a non-cloud, internal copy of the data that is either offline or electronically separated from your cloud copy. A hybrid cloud is far more likely to survive a hack, provided the internal copy or private cloud is kept separate and secured independently.
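What “separated electronically” looks like will vary, but the key is a pull model: the internal copy is fetched by a system the cloud credentials cannot reach, so an attacker who owns the cloud account cannot also erase the backup. Here is a minimal sketch of that idea, assuming the cloud data is already exported or mounted at a local path; the paths and retention count are hypothetical.

```python
import datetime
import pathlib
import shutil
import subprocess

# Hypothetical locations: an export of the cloud data that this host pulls from,
# and an internal destination that no cloud-side credential can reach.
CLOUD_EXPORT = "/mnt/cloud-export/"
INTERNAL_VAULT = pathlib.Path("/backup/internal-vault")
KEEP = 14  # number of daily copies to retain

def pull_copy() -> None:
    """Pull a dated copy of the cloud data onto separately controlled internal storage."""
    dest = INTERNAL_VAULT / datetime.date.today().isoformat()
    dest.mkdir(parents=True, exist_ok=True)
    # The pull direction is the point: credentials for this host never live in the cloud,
    # so a compromised cloud admin account cannot reach or erase these copies.
    subprocess.run(["rsync", "-a", "--delete", CLOUD_EXPORT, str(dest)], check=True)

    # Simple retention: keep only the most recent KEEP daily copies.
    copies = sorted(p for p in INTERNAL_VAULT.iterdir() if p.is_dir())
    for old in copies[:-KEEP]:
        shutil.rmtree(old)

if __name__ == "__main__":
    pull_copy()
```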

At SwishData, we generally use public cloud for DR copies of the data and private cloud for the primary production copy. This is not because of security concerns, but because the total cost of ownership of a private cloud is often better than public cloud for the primary production copy. DR and backup copies, on the other hand, are perfect for the cloud for several reasons. The most obvious is cost: a self-built DR site tends to cost nearly as much as the primary site, yet the performance requirements and usage footprint of DR copies are significantly lower. Those lower needs are a good match for a more cost-effective cloud service.

Now, after the recent news, we’ll include some added notes about security policies for users and admins in our recommendations to customers considering cloud services.

Want to hear more from SwishData? Visit my Data Performance Blog, and follow me on Facebook and Twitter.

Posted by Jean-Paul Bergeaux on Aug 28, 2012 at 12:18 PM


IT People are Still Human

By Jean-Paul Bergeaux
Chief Technology Officer, SwishData

For years, people have made fun of the fact that VHS beat out Betamax. It’s become an iconic reference to the fact that the best technology does not always win, and it’s truer than most think. I recently had lunch with a friend who was telling me why he chose Citrix, a choice I wouldn’t have made myself. I wanted to know why he chose Citrix’s desktop virtualization solution over VMware’s View. Full disclosure: I am hip-deep in VMware. But I recognize that I am indeed human, can “drink the Kool-Aid” and could have the wool pulled over my eyes. IT people are only human.

The friend I mentioned is a very intelligent and informed IT professional who is good in a lot of areas. The old saying in IT goes, “You can’t be good at everything; there are too many fields to learn about.” My friend is one of the few who bucks that trend. So when I asked him why he chose Citrix, I was hoping he would enlighten me. His reply was, “I’m human.”

In his last job, someone else had chosen Citrix, and he administered it for a while. He was generally happy with it and felt comfortable deploying it from scratch this time, probably doing a better job than the installation he had inherited. It was easier to stick with what he knew because it could get the job done. He didn’t have time to do a full comparison, choosing View would have meant learning a whole new system, and he was already overwhelmed trying to fix things at the new job.

For him, I think it’s a perfectly fine answer. Really, I do. Comfort can ease an administrator’s transition, depending on the situation. If I had caught him early enough, however, I do believe SwishData could have relieved the burden of the transition and quickly given him a solid education on the View alternative. Too often, though, IT people choose the comfort route as the default without a good, thoughtful reason. That’s frustrating to system architects who are committed to designing a best-of-breed solution. Most times, no matter how comfortable you are, you just can’t default to what you’ve always done; it’s a shortcut that many take with no intention of correcting later. In my friend’s case, he said, “I just need to get this up and running now; I’m open to changing over to View later.”

I respect that because I believe him. If we had been able to help him earlier, maybe the answer would have been “now” rather than “later.” Without that option, there were simply too many spinning plates and not enough hands. He’s only human.

Want to hear more from SwishData? Visit my Data Performance Blog, and follow me on Facebook and Twitter.

Posted by Jean-Paul Bergeaux on Aug 21, 2012 at 12:18 PM


Who is Tracking Government Agencies’ Disaster Recovery Policies?

By Jean-Paul Bergeaux
Chief Technology Officer, SwishData

Mandates continue piling up for IT management at government agencies. There are so many to comply with that agencies seem to prioritize whichever one OMB or another oversight group has asked about most recently. That leads me to ask: Who is paying attention to whether agencies are complying with disaster recovery (DR) mandates?

SwishData engineers and system architects run into every level of preparedness, and unpreparedness, for disasters. Sometimes there is a ‘compliance’ look to the design, but no real recoverability from a disaster in practice. Often, where a DR practice does exist, it is rarely tested, and admins fear it won’t work when it’s actually needed. Why is this acceptable when so much of this government data is regarded as high-value?
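The fear of an untested plan is also the easiest part to fix: even a small, scheduled restore drill that pulls one known file out of last night’s backup and verifies it builds real confidence over time. Here is a rough sketch of such a check. The paths are hypothetical stand-ins for whatever backup target an agency actually uses, and for data that changes during the day you would compare against a checksum recorded at backup time rather than against live production.

```python
import hashlib
import pathlib

# Hypothetical paths: a file in production and the copy restored from last night's backup.
PRODUCTION_FILE = pathlib.Path("/data/production/records.db")
RESTORED_FILE = pathlib.Path("/restore-test/records.db")

def sha256(path: pathlib.Path) -> str:
    """Hash a file in chunks so large files don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def drill() -> bool:
    """Return True if the restored copy exists and matches the reference bit for bit."""
    if not RESTORED_FILE.exists():
        print("FAIL: the restore did not produce the test file")
        return False
    ok = sha256(PRODUCTION_FILE) == sha256(RESTORED_FILE)
    print("PASS" if ok else "FAIL: restored file does not match the reference copy")
    return ok

if __name__ == "__main__":
    drill()
```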

The best agencies have true push-button hot sites at multiple locations far enough apart to be genuinely prepared for a disaster; a sterling example is the Marine Reserves’ failover during Katrina. On the other hand, there are sites where backups barely get made because of tape failures, and the copies that do exist are kept at the same site as the production data. It’s frightening!

There has been lots of talk about the cloud as the solution. Too often, though, this means farming out the production data and hoping the service provider has a DR plan, with no requirement to actually understand that plan. It’s a black box that simply has to be trusted; Google and Amazon are prime examples. That’s not a solution, that’s piling onto the problem.

SwishData has focused on real DR and push-button failover designs since the company’s inception; in fact, it’s what SwishData was founded on. Since then, we’ve added more pieces to the solution, such as mobility and faster access to DR and remote sites. We’re paying attention.

The only people who seem to have a complete DR plan and design are the ones who do it out of their own core belief that it’s critical to their mission. Good for them. So maybe this is a self-serving rant, but who’s monitoring the rest of the data centers? Maybe no one is.

Want to hear more from SwishData? Visit my Data Performance Blog, and follow me on Facebook and Twitter.

Posted by Jean-Paul Bergeaux on Aug 14, 2012 at 9:03 AM


Solving WAN Issues Takes More Than Adding Bandwidth

By Jean-Paul Bergeaux
Chief Technology Officer, SwishData

Government agencies face more WAN problems today than ever before as data center consolidation, mobile and telework computing, and public cloud computing all converge on IT at once. At the risk of making enemies of bandwidth providers, I have to speak out: stop the madness! I am surprised at how often IT and network admins run to these providers to solve WAN problems. It’s not going to work. The problem is bigger than bandwidth.

When chatty apps attack

Often, after adding bandwidth, organizations find they are not even using all of the expensive new capacity, and the application experience hasn’t improved. So what gives? How well a WAN fix works depends heavily on how well it integrates with your applications, and here I am specifically thinking of the problems applications have with distance. Anyone who looks beyond bandwidth providers to solve network performance problems will look to WAN acceleration products. However, even a basic network acceleration product that does not integrate with applications will probably not solve your problems.

Applications are designed for a local LAN and tend to be ‘chatty,’ which means they hold short, frequent conversations with end-user computers. When those computers are moved far apart, latency becomes an issue in those conversations. If you’ve ever watched a low-quality satellite interview with a reporter, you might have noticed the wait between one party finishing a sentence and the other responding; it’s pretty annoying to sit through an extended interview like that. Now imagine an interview made up of hundreds of one-sentence questions and answers. It would take a while. That’s what’s happening to applications over the WAN.
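The arithmetic behind that analogy is worth doing once. Using illustrative numbers rather than measurements from any particular application, a modestly chatty operation that needs a few hundred round trips is barely noticeable on a LAN but takes half a minute over a long WAN link, and no amount of extra bandwidth changes it:

```python
# Back-of-the-envelope effect of latency on a "chatty" operation.
# Illustrative numbers, not measurements from any particular application.
round_trips = 400          # short request/response exchanges to complete one task
lan_rtt_s = 0.0005         # ~0.5 ms round trip on a local LAN
wan_rtt_s = 0.080          # ~80 ms round trip across a long WAN link

for label, rtt in [("LAN", lan_rtt_s), ("WAN", wan_rtt_s)]:
    total = round_trips * rtt
    print(f"{label}: {round_trips} round trips x {rtt * 1000:.1f} ms = {total:.1f} seconds of pure waiting")

# LAN: 0.2 seconds; WAN: 32 seconds -- and adding bandwidth changes neither number.
```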

How to fix it (not by spending a fortune on bandwidth)

The real answer is an application-aware, smart network optimization solution, such as the Riverbed Steelhead appliance. These products understand layer-seven information and optimize how the application communicates with the user. Steelhead is unique in that it can break open an application’s packets even when they are encrypted, such as secure Exchange 2010 traffic. There are also solutions that target the application itself so that round trips never get initiated, such as Riverbed’s Aptimizer in its Stingray line of products.

Both are incredible products with near-immediate return-on-investment potential that also resolve the issues users actually complain about. The kicker? They typically reduce bandwidth usage by 60 to 95 percent, allowing some organizations to shrink network contracts they had been planning to expand. Anyone putting together a mobility or data center consolidation solution must look at these products. Spending a fortune on network bandwidth isn’t going to solve the problem.

Posted by Jean-Paul Bergeaux on Aug 07, 2012 at 9:03 AM


Data Protection is Not About Backup

By Jean-Paul Bergeaux
Chief Technology Officer, SwishData

Too often, data protection is an afterthought, and people focus on the wrong concepts: to a lot of people, data protection just means having backups. But think about this: in the event of an IT failure or physical disaster, isn’t how quickly you can get at that backed-up data what really matters? That means the most important part of data protection is the restore.

Many organizations rely on technologies like snapshots, as they should, for temporary data protection with little production impact. Unfortunately, some rely completely on those, as though they are the whole solution. They’re not! It’s important to understand the different functions of a complete data management strategy, and snapshots are only part of the picture. Why do IT professionals do this? The answer is simple: Backups and backup windows can be a major pain.

Integrated Backups, Quick Data Recovery

IT professionals should consider an integrated backup solution that can recover data within minutes and back up servers across heterogeneous storage with very little impact on production. Advanced recovery capabilities are a huge plus as well. In all honesty, restoring an entire server, volume or individual file should take an admin only a few clicks.

Peter Eicher of Syncsort Software drove home the importance of a fast restore by pointing me to the example of RIM’s infamous BlackBerry outage. Had service been disrupted for 15 minutes, complaints would have been minor. But an hours-long outage leaves customers irate, and anything longer leaves you looking for new customers. Translate this example to government and the consequences become even more significant. What if the mission, maybe even lives, depends on access to data? What if taking hours, rather than minutes, to restore data means the mission is compromised?

Your Mission Hinges on Restore Time

The right solution can deliver a combination of 95-percent faster backups, a 95-percent reduction in VM backup impact and 99.99-percent backup success rates, all while using 90 percent less storage. Remember: restores, and fast ones, are what matter in this 24-7 world. The story of RIM’s outage wasn’t about lost data; it was about how long it took to restore service. The lesson here is the importance of being able to restore any backup as a virtual machine, nearly eliminating the restore window by directly mounting a backup image and using it immediately. Instead of hours to restore a disk or an entire system, it takes minutes or less, and that is the difference between opportunity missed and mission accomplished.
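To see why mounting the backup image directly changes the math, compare it with copying the data back first. The figures below are illustrative assumptions, not benchmarks of any product: pulling a half-terabyte server image back across a gigabit link takes well over an hour before anything can start, while a mount-and-run approach has the data usable in roughly the time it takes to attach and boot the image.

```python
# Rough restore-time comparison: traditional copy-back vs. mounting the backup image.
# Illustrative assumptions, not benchmarks of any specific product.
data_gb = 500                      # size of the server image to bring back
link_gbps = 1.0                    # restore network link
effective_throughput = 0.7         # protocol and disk overhead eat some of the link

copy_back_seconds = (data_gb * 8) / (link_gbps * effective_throughput)
mount_seconds = 120                # assumed time to attach and boot from the backup image

print(f"Copy-back restore: ~{copy_back_seconds / 3600:.1f} hours before the server is usable")
print(f"Instant-mount restore: ~{mount_seconds / 60:.0f} minutes before the server is usable")
```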

For more details on how SwishData can provide these results for your organization, contact us at www.swishdata.com.

Posted by Jean-Paul Bergeaux on Jul 31, 2012 at 9:03 AM