Tuesday, July 20, 2010

Malware Attacking SCADA Systems - from USB Device

A really interesting article that I think we should all be aware of - Microsoft Investigating Windows Zero Day Trojan - brings to light an even bigger threat to our overall ecosystem and economy: cyber terrorism.

For those that may not be aware of the importance of SCADA systems - you may want to recall the blackout a few years ago that took out the electrical grid from Ohio to New York. Many do not know that it was believed to have been caused by a virus infecting the reporting system. These systems power nuclear plants, electrical grids, oil pipelines, etc.

This article makes very clear that as a global economy we have to think about the technologies we put in place and their impact. These types of viruses should be a concern not only for USB devices on SCADA systems but also for those embarking on their journey into client virtualization.

Why worry? Virtualization exponentially increases security risk to companies and our underlying infrastructure. How? VM sprawl, and undetected/unregistered virtual applications that have security holes in their virtual operating systems. While SCADA systems are pretty locked down - if a USB device can communicate with a rootkit in the underlying operating system, what about virtual operating systems that go undetected by traditional inventory programs?

VMs in the wild may not have inventory agents installed, and client systems (unlike vSphere in the datacenter) may not be accessible when the VMs are offline. Application virtualization poses an even greater threat here.

Typically, inventory searches the registry for key elements that identify that an application is installed, and Patch Management tools apply the patch to the underlying OS. But if the OS is virtual, the traditional tools will not see it or be able to patch it unless they are specifically integrated or programmed to do so. If the person using the virtual application has administrative rights to their machine, a virus can continue to exploit the vulnerability within the virtual operating system and pass through to the underlying PC.
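The blind spot can be sketched in a few lines. Everything below is illustrative: the "registry" is a plain dictionary standing in for the Windows uninstall keys (rather than a real winreg query), and the application names are made up - the point is only that a registry scan misses anything that never wrote to the real registry.

```python
# Illustrative sketch: why registry-based inventory misses virtual apps.
# The dict below simulates entries under
# HKLM\Software\Microsoft\Windows\CurrentVersion\Uninstall;
# on a real Windows host you would enumerate that key with winreg.

# Conventionally installed apps register an uninstall entry.
simulated_uninstall_keys = {
    "Adobe Reader 9": {"DisplayVersion": "9.3.3"},
    "7-Zip 4.65":     {"DisplayVersion": "4.65"},
}

# Apps actually present on the endpoint, including a virtualized one
# that carries its own virtual OS and never touches the real registry.
apps_on_machine = ["Adobe Reader 9", "7-Zip 4.65", "LegacyCRM (virtualized)"]

def registry_inventory(uninstall_keys):
    """What a traditional registry-scanning inventory tool would report."""
    return sorted(uninstall_keys)

def inventory_blind_spots(on_machine, uninstall_keys):
    """Apps running on the endpoint that the inventory scan cannot see."""
    return [app for app in on_machine if app not in uninstall_keys]

print(registry_inventory(simulated_uninstall_keys))
print(inventory_blind_spots(apps_on_machine, simulated_uninstall_keys))
```

The virtualized application shows up only in the blind-spot list - which is exactly the gap a registration mechanism (point 2 below) is meant to close.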

What are ways around this?
  1. Lock down the PC - disallow administrative rights. This is of course hard for some organizations, as many legacy applications still require administrative rights to function.

  2. Register the Virtual Application - ensure the virtual application allows you to register it with the underlying Operating System (for example, ThinApp uses ThinReg). Do not use technology from vendors that do not provide some mechanism for alerting the physical system that the application is there.

  3. Ask your Inventory & Patch Management Vendors if they support that application type - some vendors do have integration with traditional tools such as SCCM or BMC. Tools like BMC BladeLogic for Clients (Marimba) can provide inventory for applications deployed through their system, which is useful for at least base inventory when there is no clear out-of-the-box integration. I would also recommend asking the Systems Management Patch Vendors to provide some type of hook into these solutions so they can be patched quickly without repackaging. This last part is one of the biggest inhibitors to broad-scale adoption of application virtualization beyond just a handful of applications.

  4. Create a Process with Service Level Agreements to patch the Virtual OS - Many companies I have worked with over the years have set SLAs to quickly apply patches across their many computers. How do they do it across dozens of virtual applications? It depends on the architecture of the virtual application. Make sure you work with your vendor's services team to create a Disaster Recovery plan for zero-day viruses such as this, ensuring the Virtual OS receives the same patches on a monthly basis as part of your overall patch process.

  5. Only run virtual applications in User Mode - When possible, eliminate administrative rights. Most SCADA systems are pretty locked down, which makes the USB trojan even more worrisome. Companies choosing to leverage application virtualization should take their overall imaging and rights management process to the next level. Now that you have technology that can lock down access rights - use it.
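The SLA idea in point 4 can be sketched in a few lines of Python. The package names, dates, and the 30-day window are all made up for illustration; in practice the "last repackaged" dates would come from your packaging or deployment system rather than a hard-coded dict.

```python
from datetime import date

# Hypothetical record of when each virtual package was last rebuilt
# with current patches for its embedded virtual OS components.
virtual_packages = {
    "LegacyCRM.exe":  date(2010, 5, 10),
    "OldReports.exe": date(2010, 7, 6),
}

PATCH_SLA_DAYS = 30  # e.g., repackage within 30 days of Patch Tuesday

def out_of_sla(packages, today, sla_days=PATCH_SLA_DAYS):
    """Return virtual packages whose embedded components are stale."""
    return sorted(name for name, patched in packages.items()
                  if (today - patched).days > sla_days)

print(out_of_sla(virtual_packages, today=date(2010, 7, 20)))
```

A report like this won't patch anything by itself, but it turns the SLA into something you can measure - which packages need a repackaging cycle this month, rather than discovering them after the next zero-day.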

Some virtualization vendors will claim anti-injection protection and the like, which is great, but you are only as strong as your weakest link. It is important to really think through the security ramifications prior to deploying virtualization technology (Virtual Machines or Applications) on clients. Make sure they fit into your existing SLAs and don't put your company at risk.

Jeanne Morain

Wednesday, July 14, 2010

Application Virtualization Journey - Begins with a SINGLE Step

Many of my favorite customers have asked me where to begin with application virtualization. A few have been very proud that they selected their first 15-20 applications to migrate over and now believe they are ready to go. My best advice to all of you is to STOP - take baby steps when approaching application virtualization.

Why? Because it is not that simple and creates as many issues as it solves (if not more). Don't start out with 15-20 applications - pick 1 or 2 and their dependent applications. Look for those that have the highest ROI for the company to enable you to do the following:

1) Build your Business Case - Applications like CRM, or custom applications that can't be migrated to a new OS, make a perfect test case. They are typically complex, with many dependent pieces and lots of calls to the underlying Operating System. Pick ones that have no other option and a low user population - don't ever start with Outlook or Office. Remember - similar to Business Service Management - if you try to swallow a fish whole, you WILL choke on it! Cut it up into bite-sized pieces, get rid of the bones and unnecessary elements, and you can have a delicacy to be savored.

2) Identify Risks to the Business - With anything there are risks, and that rings especially true for NEW technology. The earlier a technology is in its release cycle, the more stability, performance, and functional defects will be uncovered. Lessons learned only come from discovering what you didn't already know or assume. The application (and its dependent components) should let you deploy through a complete cycle to not only test your ROI assumptions but also discover hidden costs and risks. Hint: network, performance, disk space, integrations required with existing tools, etc. should all come up during your initial pilot. Some risks are minor - others can be significant depending on implementation architecture and the route taken. More to come on this topic....

3) Formulate Routes to Value (requirements) - Similar to Business Service Management, how the solution is architected really depends on the objective one has in mind. With Application Virtualization there are several possibilities, and the requirements should vary based on the end goal. For example, are you deploying Application Virtualization to reduce storage capacity for your Virtual Desktop Infrastructure? Or to migrate a legacy application to another OS? Or to reduce system dependencies for your Cloud/SaaS implementation of a single application? Or to reduce the footprint of your Citrix Server Farm?

The architecture that works best will vary depending on the end objective. An agent-based tool such as App-V may work better for OS migration due to contracts already in place with Microsoft, while an agentless tool such as VMware's ThinApp may be the better solution for Cloud/SaaS deployment (nothing to install on the endpoint), etc.

4) Solve a REAL Problem - Although Vendors have interesting opinions, they are not as relevant as customers. Why? Because YOU are the ones that have to put your job on the line to deploy their technology. YOU know your environment and pain points far better than ANY vendor. Having deployed to millions of endpoints, I can safely say that no two environments are EXACTLY the same (although there are similarities). Customers never ceased to amaze me with how they used technology to solve problems it was never intended to solve (why? Because vendors didn't even realize the problem existed). Don't buy the hype cycle around migrating to Windows 7 or other events that will drive you to bite off more than you can chew at the given moment. Yes, Application Virtualization can help you migrate (don't get me wrong), but just because you CAN doesn't mean you SHOULD. If you are not ready yet (educating your workforce and development team, understanding risks & rewards, etc.), it is better to step back and take a test run first.

Real problems I have seen that are compelling: DLL Hell (finally isolating those badly behaving applications), reducing the footprint of your Citrix Farm, reducing reboot time and the time needed for back-outs in 24x7 facilities (such as call centers), enabling test and production on the same machines for longer beta cycles, and many others.

5) Educate your Company/Team - Pick application(s) that will enable you to educate your user population on the value they will gain before you leap. When I say user I mean all users (IT, support staff, end users, the executive team, etc.). With a small pilot of no more than 500 users, you can quickly understand what types of questions virtualization will bring that you did not anticipate (FAQs and training needed for the masses). You can also determine the impact on current reporting and provisioning tools, license compliance for regulatory or software usage (are new tools or reports needed?), Service Level Agreements (patching a security hole, reducing trouble-ticket turnaround, reducing call volume into the help desk - good for ROI too), and last but not least the impact on your end users.

Each week I will try to share lessons learned along the way - from my experience - to enable customers to drive vendors and the market to evolve. Similar to Business Service Management, Server Virtualization, and other new markets, Application Virtualization is needed and has a compelling ROI - but the People, Processes, and Technology all need to evolve for the real benefit to be realized and mass deployments to occur. Today there is not one solution that has it all - so proceed with caution and select the right one for your company.


Friday, July 9, 2010

Multiple Versions of IE Not Supported - What it Means for Application Virtualization

Lack of Support for Running Multiple IEs: the Impact on Application Virtualization

One of the benefits that Application Virtualization provides is enabling customers to migrate legacy applications across operating systems without impacting the end user or incurring significant costs in testing and rewriting the application to be compatible with the new version of the OS. This has become particularly important for those that believe XP Mode (with its lack of interaction between applications and the new OS) will not be sufficient to enable Windows 7 migrations (as many skipped Vista).

Microsoft has stated it will not support multiple versions of Internet Explorer on the same OS - particularly when used with application virtualization: http://support.microsoft.com/kb/2020599/

What does that Mean for Application Virtualization?
The actual magnitude of the impact really depends on the architecture the solution uses. There are several different architectural approaches to application virtualization, and depending on the approach, this could be a significant risk for customers beyond the intentional virtualization of Internet Explorer.

There are essentially 4 types of architectures that exist currently in this market.

1) File redirection - The files are redirected to a different portion of the OS but are still technically installed on the machine.

2) Agent-Based Virtualization (agent installed in the OS) - An agent is installed in the Operating System and redirects calls to isolated applications. Applications must be sequenced for the agent, which determines how much memory and other system resources to allocate based on precedence.

3) Agentless Virtualization - The file system and the code to run the application are embedded within the virtual application. A Virtual Operating System is contained within the application, providing everything it needs to run independent of a full OS (registry keys, specific components) while communicating with the underlying OS.

4) Virtual Client/Agent Hybrid - The fourth architecture is based on a virtual client that leverages some form of file system and manages all the components independently. It combines the agent approach (without installing the agent in the OS) with the virtual OS.
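As a rough summary, the management consequences of the four approaches can be encoded as a small lookup table. The True/False values below reflect the generalizations in this post - what a registry-scanning inventory tool would typically see - not any specific vendor's documented behavior:

```python
# Rough encoding of the four architectures and what a traditional
# registry-scanning inventory tool can see. Values are the
# generalizations made in this post, not vendor spec sheets.
ARCHITECTURES = {
    "file_redirection": {"installed_on_os": True,  "visible_to_inventory": True},
    "agent_based":      {"installed_on_os": True,  "visible_to_inventory": False},  # the agent is visible; the isolated apps are not
    "agentless":        {"installed_on_os": False, "visible_to_inventory": False},
    "hybrid":           {"installed_on_os": False, "visible_to_inventory": False},
}

def needs_registration_hook(arch):
    """True when apps must be explicitly registered (a ThinReg-style step)
    before inventory and patch tools can account for them."""
    return not ARCHITECTURES[arch]["visible_to_inventory"]

print(sorted(a for a in ARCHITECTURES if needs_registration_hook(a)))
```

Three of the four approaches leave apps invisible to a plain registry scan, which is why the registration and vendor-integration steps discussed in the previous post matter for anything beyond simple file redirection.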

How does this translate to impacting product support?
For most application virtualization solutions it literally means that only one copy of IE (ideally the one shipped as part of the OS) is supported. We all know this is not realistic or possible, particularly when newer versions of IE may break mission-critical applications.

One would have to uninstall the version of IE that comes with the OS and use only a single version of virtual IE across platforms. There is still quite a bit of benefit in this approach. The biggest benefit is reducing costs across migrations from one OS to the other, but also being able to support multiple OSes with the same version of IE without a significant amount of regression testing. It would be equivalent to what is done in most IT shops today with physically installed IE - say when IE 7 came out but many still used IE 6.

It also lends itself to running a side-by-side pilot of a beta version. For a small group during the pilot phase, although there would not be broad-based support, chances are that letting existing pilot users keep access to the current version while the new version is being tested is more valuable and less risky than denying them access. The key thing here is that nothing installed means nothing to back out, and fewer corruption issues with the base OS.

Architectural Risk
Although the architectures in categories 3 & 4 (Agentless and Hybrid) offer the biggest benefit to customers in terms of portability and reduced risk from having nothing installed on the endpoint, there are some significant risks and considerations that come into play with this approach given Microsoft's support policy.

For products like ThinApp that have written their own virtual operating system, the risks are far less, because the written statement specifies one version of IE per operating system. Although I am not a lawyer - having worked on EULAs in Product Management for a little under 15 years - I do know there is some leeway in language. Microsoft's support policy states only one version of IE per OS. Given a product like ThinApp that has its own virtual OS, one could argue that the virtual IE is running on its own OS (the VOS). The likelihood of significant impact on the underlying OS would not be as great, depending on how the package is created to interact with the base OS. Remember - for critical applications it is in YOUR best interest as the customer to check with your OS vendor on what their policy is; I cannot speak for either Microsoft or VMware on this one.

Then what is the big deal? Hidden Risks
Certain application virtualization solutions actually use IE as their virtual file system in lieu of writing their own. That means that with each application, one is running an instance of IE on the base OS. The support policy would then extend beyond intentionally virtualizing IE to migrate to a new OS and would apply to all applications being virtualized.

What should a customer do?
Ask the vendors you are considering about their architectural approach so you can make an informed decision as to whether or not this is an issue for your organization. For example: is it a critical application that has key MS components or is manufactured by MS (like Outlook or Office) that you must have support for, or is it a home-grown solution based on Java or some other component that does not require any support from Microsoft?

Either way it is better to understand the pros and cons of ALL the architectural approaches prior to deciding which application virtualization solution to use. Although a few are close - there is not one single solution that has actually achieved the Nirvana that many customers asked for in order to obtain the true Universal Client.

Leap with caution as the technology and market mature, and understand what should and shouldn't be done versus what can and can't. Remember, just because you can do something doesn't mean you should. I have seen some questions on the applicability of Application Virtualization - I am a firm believer that it is required to unchain applications from the OS and achieve the true Universal Client. But like all technologies, it just needs time.

Stay Tuned - next week regarding Implementation Considerations and Hurdles when deploying Application Virtualization within the Enterprise.


Questions? Need tips or advice on your App Virtualization or VDI deployment with your current architecture - contact me at: jmorain@yahoo.com