Wednesday, September 8, 2010

Citrix & Cisco UCS - Makes Perfect Sense

One of the best posts I have seen coming out of VMworld is from Harry Labana - CTO of Citrix. What a few may not know is that prior to joining Citrix, Harry was a critical contributor to the overall desktop architecture of Goldman Sachs (nearly 80,000 desktops at one time). To start, I must be the first to admit that, as a long-time Marimba customer, I have admired Goldman Sachs for their top-notch architecture and innovation. They were one of the first to do a virtual desktop solution before it was the in thing to do.

His latest blog post articulating the value of the Cisco & Citrix relationship is refreshing because it brings customer experience and reality into the overall virtualization media hype. For those that have not read it yet - it is worth seeking out.

Of course it has the vendor's glasses on - as we all would expect - but to further Harry's point, Cisco UCS and Citrix combined with NetApp do make perfect sense in order to bring desktop virtualization from a point solution to the mainstream. Why?

UCS brings a host of different technologies (not just the network and the routers) but also key integrations with existing management frameworks (like BMC, HP, Microsoft, and others) to help simplify the transition from legacy hardware/software to virtual environments and the cloud.

UCS was designed around Policy Orchestration, Templates, and ITIL - key success factors in Systems Management (and BSM) that the biggest and best datacenters have recognized and adopted over the years.

NetApp has been a key VMware partner for years and has built up the credibility and IP around optimizing storage access and control not only in virtual server environments but also virtual desktop environments almost since their inception.

Starting with Presentation Server - Citrix has tried to solve the "problem" application issues for desktops delivered in the server realm for years.

Many Johnny-come-lately vendors speak of the revolution, suggesting customers rip and replace existing systems without understanding the true implications from a customer perspective - or, in some cases, without understanding what it takes to build out and maintain the infrastructure needed to manage a large number of distributed endpoints. That applies not just to the technology but also to the overall business impact.

Why is this important?
  • Customers don't have a greenfield - they have legacy systems management, hardware, OSes, and apps that in some cases they cannot easily swap out. But where they can, it makes sense to look at the whole solution (network, storage, desktop experience). Their existing systems help them report on and prove compliance (HIPAA, PCI, SOX, etc.). Whatever they add needs to work with what they have today, not just where they will eventually be in a few years.

  • Desktops have a bigger impact on the business - Many companies are starting with niche deployments because they cannot afford a minute (hour, week) of downtime across their entire call center, work group, or other key individuals that rely on the system to sustain their primary job function. This will be even more prevalent as EMR and other regulatory requirements around technology become more mainstream. For example, a doctor without patient history (meds, ailments) is like a fish out of water. The same rings true for a marketer without PowerPoint, a lawyer without case law/contracts, etc. It is even bigger for the small to medium business owner obtaining services from the cloud without on-site IT.

  • Rising energy costs & impact on the datacenter - Many companies I have worked with over the years moved to server virtualization because they were running out of power in the datacenter. Energy costs have continued to climb (particularly in areas like Phoenix that have a high number of data centers), and in the down economy many IT shops cannot justify adding another POP or expanding power consumption. Remember, IT usually is not their primary business but a means of doing business...

The key takeaway that I saw in this article is that they are looking at it from a different perspective - the one that counts: the customer's.

Monday, August 23, 2010

Client Virtualization - Hard Look at ROI

Client virtualization at first blush can have a compelling impact on ROI when you view it through the eyes of the vendor. The real truth lies in the details and the overall impact on your install base. When trying to determine the true ROI and/or TCO from both a CAPEX and OPEX perspective, it is best to understand the total impact of the solution selected.

How do you realistically calculate TCO or ROI?
People often ask me whether to start with CAPEX or OPEX. The real answer lies in both. Depending on the type of client virtualization being implemented, you will want to look at the entire lifecycle of the client, the applications, and the overall business directives. Remember that sometimes TCO/ROI is not enough to build a case. This particularly rings true when, for example, the implementation would have a significant impact on end users' ability to perform their job function (road warriors, doctors, teachers) and their level of connectivity.

Watch Out for Shifting Costs
Virtual desktops and applications do have significant value in certain situations to aid with compliance, reduce application lifecycle costs, and eliminate downtime. However, there are additional costs that must be considered when building the case to determine whether a particular solution or architecture is right for your business.

Each vendor will provide a nice TCO or ROI calculator based on what is "known" today for a typical desktop deployment. However, virtualization of desktops and/or applications is anything but typical. It adds overhead and complexity that must be factored into the calculations, such as:

  • Reduced ability of systems management tools to apply differential (byte-level) updates to applications - this increases the application data load on networks. The impact varies per virtual application and systems management vendor, and it needs to be included in your selection testing.

  • Increased storage requirements both in the data center and on the endpoint - this requires additional hard drive/NAS/SAN capacity and computing power. For example, prior to application or client virtualization, the user typically had only a single copy of the OS or application on the endpoint. Now they can have multiple copies of the OS, different versions of the same application, and/or programming frameworks (.NET or a JVM). This in turn increases storage requirements in the data center for storing those multiple copies and impacts the network for download, patch, and update.

  • Increased management overhead in other areas - while virtualization decreases some areas of management overhead (packaging/repackaging and test), it increases the complexity of the overall management burden. Why? Because before, you had only a single application to patch, update, inventory, and manage on a single OS. Now there are multiple OSes for single users and multiple apps. Each application will need the same level of care for patch, update, inventory, and management in order to ensure compliance with regulatory, business, and security directives. Part of the ROI exercise should be calculating the maximum number of applications and/or OSes you will support in your client environment and what the costs (with the new virtualization factors) will really be.

  • Increased operational expenses in other areas - Before, the line between server and desktop was very clearly drawn, with the exception of Citrix Presentation Server (now XenApp). With the introduction of desktop virtualization, many companies are coming to realize that they will need more seasoned experts in the troubleshooting cycle to assist the help desk (solution centers). These individuals have to be virtual host experts, able to determine which server originated which version of which applications and OS in order to troubleshoot individual issues, audit application access, and understand total impact. Network, database, and server virtualization experts will all need to be part of level 2 support (not just escalations any more), and/or the service desk will need more of these experts. This in turn will drive up operational costs.

  • Impact on end user productivity - Depending on the application, this one can have mixed results. It is critical to understand who the target users are and what overall impact this type of technology will have on their job function prior to deciding to virtualize their desktops or applications. For example, there is a big push in healthcare to provide clinical desktops for physicians, nurses, and RTs as either virtual desktops or applications. This has had mixed success depending on the implementation and the stability of the technology. For high-bandwidth, high-throughput scenarios it typically works great - until the network or electricity is down, or the physician tries to access the application remotely from a clinic or home office that has low bandwidth. Backup procedures and access need to be built into the equation for critical applications as part of the overall DR plan for each user. And the cost of downtime needs to be calculated not just as an hourly dollar amount but as overall business impact (liability, customer care, and employee satisfaction). Don't forget - it was the users, not IT, that killed the Vista deployments due to the overall impact on their job performance...
There is more to reviewing the overall ROI/TCO than what you can extract from a vendor's calculator. Remember - they are not going to build in any factors that cast their solution in anything but a positive light (they want to get your business). It is up to YOU, the customer, to determine what the hidden factors are and calculate them into the overall equation. I have seen customers that, depending on their business model, elected to move only a subset to virtualization and/or selected just a component once they realized that the traditional model was still less expensive from a people, processes, and technology perspective (for both CAPEX and OPEX).
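To make this concrete, here is a minimal sketch of folding those hidden factors into a vendor's baseline number. Every rate and weight below is a hypothetical assumption for illustration - plug in figures from your own pilot, not these.

```python
# Hedged TCO sketch: start from the vendor's per-seat figure and layer on
# the hidden virtualization costs discussed above. All rates are made up.

def adjusted_tco(seats, vendor_per_seat,
                 storage_multiplier=1.5,        # multiple OS/app copies per endpoint
                 network_update_overhead=0.10,  # lost byte-level diff updates
                 extra_ops_staff_cost=0.0):     # added level-2 virtualization experts
    baseline = seats * vendor_per_seat
    # Assume storage is roughly 20% of the baseline cost before the multiplier.
    storage_delta = baseline * 0.20 * (storage_multiplier - 1.0)
    network_delta = baseline * network_update_overhead
    return baseline + storage_delta + network_delta + extra_ops_staff_cost

# A vendor calculator says $300/seat for 1,000 seats; the "real" number is higher.
print(adjusted_tco(1000, 300.0, extra_ops_staff_cost=120_000))
```

The point is not the arithmetic but the shape: every hidden factor becomes an explicit parameter you can defend, instead of a line item the vendor's calculator quietly omits.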

Jeanne Morain

Tuesday, July 20, 2010

Malware Attacking SCADA Systems - from USB Device

A really interesting article that I think we should all be aware of - Microsoft Investigating Windows Zero Day Trojan - brings to light an even bigger threat to our overall ecosystem and economy from cyber terrorism.

For those that may not be aware of the importance of SCADA systems - you may want to recall the blackout a few years ago that took out the electrical grid from Ohio to New York. Many do not know that it was believed to be caused by a virus infecting the reporting system. These systems power nuclear plants, electrical grids, oil pipelines, etc.

This article brings to light very clearly that as a Global economy we have to think about the technologies we put in place and their impact. These types of viruses should not only be a concern for USB devices on SCADA systems but also those embarking on their Journey into client virtualization.

Why worry? Virtualization exponentially increases security risks to companies and our underlying infrastructure. How? VM sprawl and undetected/unregistered virtual applications that have security holes in their virtual operating systems. While SCADA systems are pretty locked down - if a USB device can communicate with a rootkit in the underlying operating system, what about virtual operating systems that can go undetected by traditional inventory programs?

For VMs in the wild - they may not have inventory agents installed, or may not be accessible on client systems when the VMs are offline (unlike vSphere in the datacenter). Application virtualization poses an even greater threat here.

Typically, inventory tools search the registry for key elements that identify an installed application, and patch management tools apply the patch to the underlying OS. But if the OS is virtual, then unless the tool is specifically integrated or programmed to do so, it will not see the virtual OS or be able to patch it. If the person using the virtual application has administrative rights to their machine, a virus can continue to exploit a vulnerability within the virtual operating system and pass through to the underlying PC.
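The blind spot is easy to demonstrate. The sketch below simulates reconciling what a registry-based inventory tool reports against the packages actually present on an endpoint; the package names are invented for illustration, and a real implementation would query your own inventory and discovery tools.

```python
# Registry-based inventory only sees applications that wrote keys into the
# base OS. Virtual applications carrying their own virtual OS never do, so
# a simple reconciliation exposes the unmanaged (and unpatched) set.

registry_inventory = {"Office 2007", "Adobe Reader 9"}   # what the patch tool sees
endpoint_packages = {
    "Office 2007",
    "Adobe Reader 9",
    "LegacyApp (virtual)",   # packaged with its own virtual OS
    "CustomCRM (virtual)",
}

unmanaged = endpoint_packages - registry_inventory
for pkg in sorted(unmanaged):
    print(f"WARNING: invisible to inventory/patch tools: {pkg}")
```

Anything in the `unmanaged` set is exactly the population the trojan scenario above preys on.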

What are ways around this?
  1. Lock down the PC - disallow administrative rights. This is of course hard to do for some organizations, as many legacy applications still require administrative rights to function.

  2. Register the virtual application - ensure the virtual application allows you to register it with the underlying operating system (for example, ThinApp uses ThinReg). Do not use technology from vendors that do not provide some mechanism for alerting the physical system that the application is there.

  3. Ask your inventory & patch management vendors if they support that application type - some vendors do have integration with traditional tools such as SCCM or BMC. Tools like BMC BladeLogic for Clients (Marimba) have the ability to provide inventory for applications deployed through their system. This is useful to at least provide base inventory when there is no clear out-of-the-box integration. I would also recommend requesting that systems management patch vendors provide some type of hook into these solutions to quickly patch them without repackaging. This last part is one of the biggest inhibitors to broad-scale adoption of application virtualization beyond just a handful of applications.

  4. Create processes with service level agreements to patch the virtual OS - Many companies I have worked with over the years have set SLAs to quickly apply patches across their managed computers. How do they do it across dozens of virtual applications? It depends on the architecture of the virtual application. Make sure you work with your vendor's services team to create a disaster recovery plan for zero-day viruses such as this, and to ensure the virtual OS receives the same patches on a monthly basis as part of your overall patch process.

  5. Only run virtual applications in user mode - When possible, eliminate administrative rights. Most SCADA systems are pretty locked down, which makes the USB trojan even more worrisome. Companies that are choosing to leverage application virtualization should take their overall imaging and rights management process to the next level. Now that you have technology that can lock down access rights - use it.
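For step 4, one simple way to operationalize the SLA is to track the last patch date of each virtual OS package and flag anything past its window. The dates, package names, and 30-day window below are all hypothetical.

```python
from datetime import date, timedelta

# Each virtual application embeds its own virtual OS, so each package must
# meet the same patch SLA as a physical machine. Flag packages past the window.

PATCH_SLA = timedelta(days=30)

last_patched = {
    "CRM (virtual)": date(2010, 7, 1),
    "LegacyERP (virtual)": date(2010, 5, 2),
    "Office plugin (virtual)": date(2010, 6, 28),
}

def out_of_sla(packages, today):
    return sorted(name for name, patched in packages.items()
                  if today - patched > PATCH_SLA)

print(out_of_sla(last_patched, date(2010, 7, 20)))
```

A report like this, run monthly, gives the service desk the same visibility into virtual OS packages that patch tools already give them for physical machines.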

Some virtualization vendors will claim anti-injection protection and the like, which is great - but you are only as strong as your weakest link. It is important to really think through the security ramifications prior to deploying virtualization technology (virtual machines or applications) on clients. Make sure they fit into your existing SLAs and don't put your company at risk.

Jeanne Morain

Wednesday, July 14, 2010

Application Virtualization Journey - Begins with a SINGLE Step

Many of my favorite customers have asked me where to begin with application virtualization. A few have been very proud that they selected their first 15-20 applications to migrate over and now believe they are ready to go. My best advice to all of you is to STOP - take baby steps when approaching application virtualization.

Why? Because it is not that simple, and it creates as many issues as it solves (if not more). Don't start out with 15-20 applications - pick 1 or 2 and their dependent applications. Look for those that have the highest ROI for the company to enable you to do the following:

1) Build your business case - Applications like CRM or custom applications that can't be migrated over to a new OS make a perfect test case. They are typically complex, with many dependent pieces and lots of calls to the underlying operating system. Pick ones that have no other option and a low user population - don't ever start with Outlook or Office. Remember - similar to Business Service Management - if you try to swallow a fish whole, you WILL choke on it! Cut it up into bite-sized pieces, get rid of the bones and unnecessary elements, and you can have a delicacy to be savored.

2) Identify risks to the business - With anything there are risks, and that rings especially true for NEW technology. The earlier a technology is in its release cycle, the more stability and performance issues and defects will be uncovered. Lessons learned only come from the experience of discovering what you didn't already know or assumed before. The application (and dependent components) should enable you to run a complete deployment cycle to not only test your ROI assumptions but also discover hidden costs and risks. Hint: network, performance, disk space, required integrations with existing tools, etc. should all come up during your initial pilot. Some risks are minor - others can be significant depending on the implementation architecture and route taken. More to come on this topic....

3) Formulate routes to value (requirements) - Similar to Business Service Management, how the solution is architected really depends on the objective one has in mind. With application virtualization there are several possibilities, and the requirements should vary based on the end goal. For example, are you deploying application virtualization to reduce storage capacity for your virtual desktop infrastructure? Or are you deploying it to migrate a legacy application to another OS? Or are you deploying it to reduce system dependencies for your cloud/SaaS implementation of a single application? Or is it to reduce the footprint of your Citrix server farm?

The architecture that works best will vary depending on the end objective. An agent-based tool such as App-V may work better for OS migration due to contracts already in place with Microsoft, while an agentless tool such as VMware's ThinApp may be the better solution for a cloud/SaaS deployment (nothing to install on the endpoint), etc.

4) Solve a REAL problem - Although vendors have interesting opinions, they are not as relevant as customers'. Why? Because YOU are the ones that have to put your job on the line to deploy their technology. YOU know your environment and pain points far better than ANY vendor. Having deployed to millions of endpoints, I can safely say that no two environments are EXACTLY the same (although there are similarities). Customers never ceased to amaze me with how they used technology to solve problems it was never intended to solve (why? Because vendors didn't even realize the problem existed). Don't buy the hype cycle around migrating to Windows 7 or other events that will drive you to bite off more than you can chew at the given moment. Yes, application virtualization can help you migrate (don't get me wrong), but just because you CAN doesn't mean you SHOULD. If you are not ready yet (educating your workforce and development team, understanding risks & rewards, etc.), it is better to step back and take a test run first.

Real problems I have seen that are compelling include DLL hell (finally isolating those badly behaving applications), reducing the footprint of your Citrix farm, reducing reboot time and the time needed for back-outs in 24x7 facilities (such as call centers), enabling test/production on the same machines for longer beta cycles, and many others.

5) Educate your company/team - Pick application(s) that will enable you to educate your user population on the value they will gain before you leap. When I say users, I mean all users (including IT, support staff, end users, the executive team, etc.). By having a small pilot with no more than 500 users, you can quickly understand what types of questions virtualization will bring that you did not anticipate (FAQs and training needed for the masses). You can also determine the impact on current reporting and provisioning tools; license compliance for regulatory or software usage (are there new tools or reports that are needed?); service level agreements (if there is a patch for a security hole, reducing trouble ticket turnaround, reducing call volume into the help desk - good for ROI too); and last but not least, your end users.
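The objective-driven choice in point 3 can be captured as a simple requirements lookup before any vendor conversation. Only the first two pairings come from the scenarios above; the structure itself is just an illustrative planning aid, not vendor guidance.

```python
# Map the deployment objective to the architectural property that matters
# most. Pairings are illustrative judgment calls, not vendor statements.

OBJECTIVE_FIT = {
    "os_migration": "agent-based (e.g. App-V; existing Microsoft contracts)",
    "cloud_saas": "agentless (e.g. ThinApp; nothing to install on the endpoint)",
}

def recommend(objective):
    # An unknown objective is itself the finding: define the end goal first.
    return OBJECTIVE_FIT.get(objective, "define the end goal before selecting a tool")

print(recommend("cloud_saas"))
print(recommend("vdi_storage_reduction"))
```

Forcing every project to name its objective up front keeps the requirements honest when the vendor demos start.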

Each week I will try to share lessons learned along the way - from my experience - to enable customers to drive vendors and the market to evolve. Similar to Business Service Management, server virtualization, and other new markets, application virtualization is needed and has a compelling ROI - but the people, processes, and technology all need to evolve for the real benefit to be realized and mass deployments to occur. Today there is not one solution that has it all, so proceed with caution and select the right one for your company.


Friday, July 9, 2010

Multiple Versions of IE Not Supported - What it Means for Application Virtualization

Lack of Support for Running Multiple IEs: Impact on Application Virtualization

One of the benefits that application virtualization provides is enabling customers to migrate legacy applications across operating systems without impacting the end user or incurring significant costs in testing and rewriting the application to be compatible with the new version of the OS. This has become particularly important for those that believe XP Mode (with its lack of interaction between applications on the new OS) will not be sufficient to enable Windows 7 migrations (as many skipped moving to Vista).

Microsoft has stated they will not support multiple versions of Internet Explorer on the same OS - particularly when used with application virtualization.

What does that Mean for Application Virtualization?
The actual magnitude of the impact really depends on the architecture the solution uses. There are several different architectural approaches to application virtualization, and depending on the approach, this could be a significant risk for customers beyond the intentional virtualization of Internet Explorer.

There are essentially 4 types of architectures that exist currently in this market.

1) File redirection - The files are redirected to a different portion of the OS but are still technically installed on the machine.

2) Agent-based virtualization (agent installed in the OS) - an agent is installed in the operating system and redirects calls to isolated applications. Applications need to be sequenced, etc., to determine how much memory and other system resources to allocate based on precedence.

3) Agentless virtualization - The file system and code to run the application are embedded within the virtual application. The virtual operating system is contained within the application to provide everything the application needs to run independent of a full OS (registry keys, specific components), and it communicates with the underlying OS.

4) Virtual client/agent hybrid - The fourth architecture is based on a virtual client that leverages some form of file system and manages all the components independently. It combines the approach of having the agent (without installing it in the OS) with the virtual OS.

How does this translate to impacting product support?
For most application virtualization solutions, it literally means that only one copy (ideally the one that ships as part of the OS) is supported. We all know this is not realistic or possible, particularly when newer versions of IE may break mission-critical applications.

One would have to uninstall the version of IE that comes with the OS and use only a single version of the virtual IE across platforms. There is still quite a bit of benefit in this approach. The biggest benefit is reducing costs across migrations from one OS to the other, but also being able to support multiple OSes with the same version of IE without having to do a significant amount of regression testing. It would be equivalent to what is done in most IT shops today for physically installed IE - say, when IE 7 came out but many still used IE 6.

It also lends the ability to run a side-by-side pilot of a beta version. For a small group during the pilot phase - although there would not be broad-based support - chances are that letting existing pilot users access the current version while the new version is being tested is more valuable and less risky than trying to block their access. The key thing here is that nothing installed means nothing to back out, and fewer corruption issues with the base OS.

Architectural Risk
Although the architectures that fall into categories 3 & 4 (agentless and hybrid) have the biggest benefit to customers in terms of portability and reduced risk from not having anything installed on the endpoint, there are some significant risks and considerations that come into play with this approach given Microsoft's support policy.

For products like ThinApp that have written their own virtual operating system, the risks are far less, because the written statement specifies one version of IE per operating system. Although I am not a lawyer - having done EULAs in product management for a little under 15 years - I do know there is some leeway in language. Microsoft's support policy states only one version of IE per OS. Given a product like ThinApp that has its own virtual OS, one could argue that the virtual IE is running on its own OS (VOS). The likelihood of significant impact on the underlying OS would not be as great, depending on how the package is created to interact with the base OS. Remember - for critical applications it is in YOUR best interest as the customer to check with your OS vendor on what their policy is. I cannot speak for either Microsoft or VMware on this one.

Then what is the big deal? Hidden Risks
Certain application virtualization solutions actually use IE as their virtual file system in lieu of writing their own. That means that with each application, one is running an instance of IE on the base OS. The support policy would then extend beyond intentionally virtualizing IE to migrate to a new OS - it would apply to all applications being virtualized.
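One way to triage this during vendor selection is to classify each candidate package by architecture and flag everything that would place an extra IE instance on the base OS under a one-IE-per-OS policy. The classifications and package names below are my own illustrative assumptions, not vendor statements.

```python
# Which architectures put an extra IE instance on the base OS? Illustrative
# mapping of the four approaches described earlier (assumption, not vendor fact).

ARCH_RUNS_IE_ON_BASE_OS = {
    "file_redirection": True,    # app is still technically installed on the machine
    "agent_based": True,         # agent redirects calls within the base OS
    "agentless_vos": False,      # app runs against its own virtual OS (VOS)
    "ie_as_filesystem": True,    # solution uses IE itself as the virtual file system
}

def support_risk(packages):
    """Return the packages whose architecture raises the IE support question."""
    return sorted(name for name, arch in packages.items()
                  if ARCH_RUNS_IE_ON_BASE_OS[arch])

packages = {"AppA": "agentless_vos", "AppB": "ie_as_filesystem", "AppC": "agent_based"}
print(support_risk(packages))
```

Even a crude triage like this turns a vague policy worry into a concrete list of packages to raise with your OS vendor.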

What should a customer do?
Ask the vendors you are considering about their architectural approach so you can make an informed decision as to whether or not this is an issue for your organization. For example: is it a critical application that has key MS components or is manufactured by MS (like Outlook or Office) that you must have support for, or is it a home-grown solution based on Java or some other component that does not require any support from Microsoft?

Either way, it is better to understand the pros and cons of ALL the architectural approaches prior to deciding which application virtualization solution to use. Although a few are close, there is not one single solution that has actually achieved the Nirvana that many customers have asked for in order to obtain the true universal client.

Leap with caution as the technology and market mature, and understand what should and shouldn't be done versus what can and can't. Remember, just because you can do something doesn't mean you should. I have seen some questions on the applicability of application virtualization - I am a firm believer that it is required to unchain applications from the OS and achieve the true universal client. But like all technologies, it just needs time.

Stay Tuned - next week regarding Implementation Considerations and Hurdles when deploying Application Virtualization within the Enterprise.


Questions? Need tips or advice on your App Virtualization or VDI deployment with your current architecture - contact me at:

Sunday, June 20, 2010

Happy Father's Day!

In the midst of the hype cycle we often lose sight of what is really important in life. Today is a day for all of us to give thanks for all of the fathers in our lives.

My father passed away long ago. My biggest regret was not taking the time out of my busy schedule to have dinner with him. A meeting came up and I had to fly off to another critical business trip, so I pushed our dinner to the next trip I would have in town a couple of months later. Unfortunately, he died two weeks before in a strange golf cart accident.

No matter how critical or important we think what we are doing is - always ask yourself if it is worth it. Worth the risk of not taking the time for those that mean the most? Never take them for granted, because you never know when they will not be there.

A special Happy Father's Day to all of the fathers out there that have made a difference in a child's life and have sacrificed time away from their families to work on projects or roll-outs of products with me over the years.

Over the next few weeks, stay tuned for developing changes and tips and tricks on planning your application virtualization deployment from lessons learned.

Next Post: Proceed with caution: Application & Desktop Virtualization
Insight on tips and tricks so as not to break your current systems management infrastructure or end up taking out network nodes, impacting performance of clients, and more...

Tuesday, February 23, 2010

Listening to Customers: Where the Rubber Meets the Road

Moving Beyond Vision
Vision is great, but it only gets a company so far. The true testament of success is the ability of the company not only to create the vision and product but to succinctly execute. A wise VC once told me that a man and a product do not a company make. The testament of a true company is one that can 1) create a vision of a product, 2) refine it to meet succinct customer needs, and 3) execute from inception to deployment.

Those three steps are a lot harder than they appear. The weakest link that I have seen over my career - the one that determines the success or failure of a product in the market - is the ability of the company to listen to its customers. Although the opinions of the CEO, CTO, engineers, marketing, and sales are interesting, they are not as relevant as those of the customer that uses the products in production to solve real-world problems. Here is a list of my top 10 lessons learned implementing application virtualization.

Top 10 Deployment Considerations: Application Virtualization
Application virtualization is a departure from how most enterprise solutions are typically packaged and deployed. Communicating and planning based on what you know about the application lifecycle is critical to both the customer and the company.

Key Questions to Ask:

1) Target application dependencies: Are there any dependencies on physically installed applications on the endpoint? If so, what are those applications? Should or can they also be virtualized? What will the potential impact be? It is always good to get a list and/or dependency mapping of all applications.

2) Why is the customer migrating the application to a virtual paradigm? The typical responses are application compatibility issues, OS migration requirements, implementing Software as a Service in a cloud, offshore support, reducing the Terminal Server footprint, or reducing lifecycle overhead. How and what you architect and implement will vary depending on the customer's ultimate goal and how they will measure the success or ROI of your product within their environment.

3) Compatibility with the target OS: Not all virtualized applications can simply be migrated to a newer version of the OS. Some require additional repackaging of the application to move to the new version. If OS migration is a key reason, it is important to see if the applications are already virtualized and to make sure that you are working with the version of the application virtualization solution that is compatible with the target OS.

4) Who are the critical people, processes, and technology that will be impacted? It is important to identify all the stakeholders during a production roll-out and educate them on what application virtualization is, the purpose of the deployment, and the expected impact on them or their organization. I typically suggest initially training a SWAT team of the key stakeholders so there are fewer issues around communication and misunderstanding, because it is a departure from the norm.

5) What is the Plan from Inception to Maintenance? The road to hell is paved with good intentions. It is key to vet and plan for the knowns, but also to add time for the unknown factors that will inevitably come up.

6) What is the impact on Current Solutions, Processes, and Systems? For example, can internal products used for testing, deployment, troubleshooting, etc. work with the virtual application? If not, what are the contingency plans for this new way of packaging applications? Does the vendor supply a virtual reg edit, for example? How will current processes for deployments, change orders, and asset tracking be impacted? Are any special integrations needed with existing tools such as Discovery, CMDB, or delivery mechanisms?

7) What is the CUSTOMER'S Starting Point? Every customer and environment is unique. It is critical to understand the customer's current grasp of Application Virtualization, educate them on the different approaches, and work with them to take baby steps toward implementing a solution so they can adjust along the way. This last one is particularly critical because too often people don't know what they don't know. It is better to start with a small pilot, identify gaps in technology, training, and processes, have them addressed, and then continue.

8) How critical is/are the applications being virtualized? I once had an Architect ask me the impact of using virtualization in the emergency room of a hospital and the best way to recover. My answer was not to use virtualization for that purpose, as the technology in general is still in its early stages. When it comes to life or death, always proceed with caution when deciding whether or not to give new technology a go. The more critical the application, the smaller the steps that should be taken and the more planning required to cover back-out plans in the event something goes wrong.

9) Does the proposed architecture meet hardware requirements? One of the key reasons many people did not migrate to Vista was the hardware tax, meaning the overhead would exceed the capacity of their systems. When a customer is proposing to deploy multiple versions side by side on a machine, disk consumption, port conflicts, network capacity, I/O, and other hardware-related questions should be considered as part of the equation. Understand what the overhead is going to be on a per-application basis to architect a realistic solution. Just because you theoretically can deploy multiple versions of the same application doesn't mean existing hardware can support it, since the footprint grows with each additional application that is virtualized.
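The per-application overhead question in item 9 can be sized with back-of-the-envelope arithmetic before any pilot. The package sizes and the cache-overhead factor below are assumptions for illustration; real numbers would come from measuring the customer's own packages and client cache behavior.

```python
# Hypothetical per-endpoint disk estimate for side-by-side virtualized packages.
# Package sizes (MB) are invented for illustration.
packages_mb = {
    "OfficeSuite_v1": 900,
    "OfficeSuite_v2": 950,   # side by side with v1, not replacing it
    "ReportTool_v3": 250,
}

# Assumed 15% extra for the virtualization client's local cache.
CACHE_OVERHEAD = 0.15

def projected_disk_mb(packages):
    """Estimate total endpoint disk consumption including cache overhead."""
    base = sum(packages.values())
    return round(base * (1 + CACHE_OVERHEAD))

needed = projected_disk_mb(packages_mb)
print(f"Projected endpoint disk consumption: {needed} MB")
```

The point of the sketch is the shape of the calculation, not the numbers: two versions of the same suite nearly double its footprint, which is exactly the growth item 9 warns about as more applications are virtualized.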

10) What is the communication strategy? People are busy with the day-to-day demands of their jobs, so it is important to set aside time to clearly create the plan, hold touch-point calls to ensure execution, and periodically evaluate the overall plan and adjust if needed. This allows everyone to set expectations that are achievable and realistic.

Some of this may sound like simple project management, but one would be surprised how many times key items like compatibility with current systems, regulatory requirements, or a simple lack of communication cause deployments to fail.


Saturday, January 16, 2010

Top 10 Virtualization Predictions for 2010

Happy 2010!

Wow, another year has passed before we knew it. What does 2010 have in store? How much of it is hype from Johnny-come-lately vendors trying to jump on the money train, and how much will actually amount to products that make a difference, is yet to be determined. Here is my best guess at what the next year holds:

1) Cloud - will continue to be the marketing "Hype" word. Every Web 2.0, IT as a Service, virtualization platform, and systems management company will continue to jump on the "Cloud" bandwagon to get their piece of the pie. This will continue to muddy the waters and confuse IT and senior executives while they try to figure out what is really a cloud and what is not. This will delay actual adoption and/or fuel more pilots (similar to VDI) as IT figures out the best strategy, the impact, and the additional tools required to drive more efficiency and lower costs during implementation.

2) Compliance - will play a much bigger role in driving new product innovation and budgets, once again catching the naysayers off guard as SOX did, and more vendors will jump on that bandwagon. We now have a Cybersecurity Czar, new provisions for health care, pending deadlines for Electronic Medical Records, and auditors asking for more detail on what tools are available to check the impact of virtualization. This is a big area that needs more thought leadership, standards, and catch-up alike.

3) The war between Physical and Virtual will continue to heat up - who will win the war over the big paradigm shift: the current physical tools in place, or the virtualization-only tools? The answer here is simple - the hybrid approach. Customers will push back on attempts to virtualize ALL of their desktops, servers, systems, and tools. They will force vendors to provide a single pane of glass to manage both physical and virtual paradigms. Those that provide the bridge between the physical and virtual paradigms across the stack will win the war.

4) Win 7 Migration Planning - with less deploying until the 2nd half of this year and into 2011. Most large enterprise customers I have worked with over the years take a minimum of 18 months to migrate to a new OS. Many are just cutting their teeth on Win 7 and trying to determine what is viable in terms of the biggest factors that inhibited Vista adoption: application compatibility, hardware requirements, and impact on end users (business continuity). They are once bitten, twice shy after Vista, although they know they have to migrate because many skipped Vista and XP is on its way out.

5) 2010 is the Year of the Desktop - over the last 3-5 years the desktop has taken a back seat in terms of budgets, hype cycle, and innovation. Many vendors tried to apply server technology to the desktop to extend their reach into the proverbial pocketbook of the Enterprise, but fueled internal debates and concern instead. Desktop managers, architects, and dependent groups are pushing back while creating their own evaluations, and new paradigms will emerge as a result. They have successfully shown, through failed pilots, business cases, etc., that solutions which solve server issues cannot easily be applied to desktop issues as well.

6) Financial Institutions will still see a cloudy market - much of the revenue they enjoyed in 2009 will diminish as the government clamps down with new taxes levied on the financial services industry to cover the recent bailout, combined with more foreclosures from the last wave of interest-only loans coming due in 2010 and 2011, and the high unemployment rate. This market will remain uncertain, and executives will continue to proceed with caution, with the exception of projects that enable more visibility (compliance and analytics) and cost-reduction initiatives such as consolidating data centers or moving staff to less expensive markets (salary, land, taxes).

7) Compliance and Compatibility will drive adoption of alternative solutions such as Application Virtualization, Web 2.0, and Virtual Desktops. The number of pilots and niche adoptions of virtual applications, conversions of applications to Web 2.0, and niche deployments of virtual desktops will increase as companies try to determine the most cost-effective approach to balancing increased demand for mobility (home office, global), regulations, and the forced migration to Win 7 or an alternative.

8) Software as a Service and IT as a Service will heat up in Healthcare, Education, and Government - regulations and budget cuts across the board are pushing C-level executives to rethink the way they do business. Smaller doctors' offices, clinics, and hospitals will scramble for low-cost alternatives that enable user-based provisioning from a hosted model rather than a per-seat license count, to reduce costs, support overhead, and the impact of not complying. Education will follow suit to ensure Privacy Act provisions are in place as more displaced workers return to school and more emphasis, and actual fines, are placed on violations of privacy. Budget-strapped state and local governments will look for ways to drive efficiency and process to deal with their staffing shortages and shortfalls in general. Virtualization and BSM will prove to be viable solutions.

9) Consolidation will continue in the overall Systems Management space - there were interesting moves this month with the HP and Microsoft partnership (which should not overshadow the VMware/HP partnership). More sleeping giants like Dell, BMC, and CA, along with their larger partners like Oracle and Cisco, will up their game through enhanced partnerships, being acquired, and/or acquiring newer technologies to refresh their portfolios to combat the race for the Cloud, Virtual Desktops, and Service Management tools as the tornado continues.

10) High growth for Process Engineering and Technical Services around the various forms of virtualization, cloud, and communications. As more large companies jump on the SaaS, Cloud, and virtualization bandwagon, there will be a greater need to work with "experts" who can help IT and C-level executives define not only the best way to implement these technologies but also what will be needed from a process and people (new skill set) perspective. These technologies are still fairly nascent and have changed the way many things are tracked, deployed, and maintained. Companies have invested millions in creating processes, tools, and audit trails around traditional systems. They will look to see how they can reap the savings of newer technologies without having to rebuild their entire ecosystem, run duplicate systems, or stretch an already stretched team any further. More expertise will be needed to reduce interdepartmental friction through process re-engineering, vendor evaluations, and moving implementations from pilot to production.

2010 will be an exciting departure and, similar to when BSM first started, a significant year of growth for many vendors (small and big alike). It will be fascinating to see who emerges as the leaders in this area.