Sunday, November 29, 2009

Giving Thanks This Holiday Season!

We all have much to be thankful for this holiday season. As with any new season, it is a time for reflection on what has passed and hope for what is to come. The dismal economic climate can cloud even the most upbeat enthusiasts' view of the future. Now is the time to embrace a new paradigm in desktop computing.

The hardest part for anyone deploying virtualization around the desktop - or up the stack to the application - is ensuring that the right set of skills is in place to understand all the requirements.

One of the biggest realizations about skills is that we should remember our past mistakes. We may find that many of the skills we need are already there - in the lessons learned from deploying physical applications, in the basic principles of server virtualization, or within our network of peers.

Many of the basic requirements for deploying a virtual application were identified and solved when deploying physical applications in distributed or server-hosted environments. Simply put - the service desk will still be critical, as will asset tracking, change management, and the ability to audit all layers of the stack, regardless of whether the environment is physical or virtual.

One way to identify the required skill sets is to understand what is being done today (that sets the minimum bar) and ensure that whatever tool is being evaluated can meet those minimum requirements from a people, process, and technology perspective.

Thursday, October 29, 2009

Impact of Virtualization & Cloud on License Compliance

The proverbial virtualization train has left the station - yet many software vendors and customers alike are still scrambling to understand the impact on their current technology, licensing models, and processes. As with many major paradigm shifts, customers are moving forward and carving out what they believe to be the right path based on limited information and their interpretation of where this market is headed, itself based on decisions from major technology vendors such as Microsoft, Oracle, and SAP.

Unfortunately for most customers, there are no true best practices across software vendors for supporting virtualization. As a consumer you need to be aware of the pitfalls, the precautions you can take to avoid them, and the ways you can leverage your existing tools and processes to reduce not only the costs but also the impact of virtualization on your organization.

Considerations to Address
  1. What Delivered - there are many different types of virtualization that can be leveraged, such as server, desktop, or application virtualization. What you are delivering will impact how you count and license the product. Is it an open source application, a custom homegrown application, a regulated and restricted-access application, or an expensive off-the-shelf application such as Adobe Photoshop? Whether the application is a desktop application, a server application, or a combination of the two (Web 2.0) makes a difference to cost structures and tracking.

  2. How Delivered - for example, is it a server application running inside a virtual machine, a virtual application launched off a USB stick or file share, a combination of virtual applications with a virtual desktop from a datacenter, or a virtual application delivered from the cloud or a managed service provider? All can have license impacts depending on the software vendor's support policies. Different software vendors have different rules depending on delivery: concurrent desktops in the datacenter (VDI/HVD), virtual applications from a client device, and streaming from the cloud all typically have different caveats. For example, Microsoft requires an additional Services Provider License Agreement to distribute its applications from a cloud environment to customers. There are many unanswered questions regarding traditional delivery of virtual applications - if I stage it, does that count as a license? Do virtual applications (not installed) count against a EULA that says the software has to be installed? One rule of thumb - if you use it, you should expect to pay for it. Software usage becomes even more critical in the virtual world.

  3. How Discover & Audit - virtualization can have a significant impact on existing tools and processes for audit and control of applications.

    -If you are using application virtualization - does the provider offer transparency into the virtual bubble? Does the virtual application have digital rights management to prevent copying from one client to the next? How do you detect a virtual application that isn't registered? What hooks are available to ensure there are no invisibility cloaks hiding applications that can call back to ISVs but go undetected by the company?

    -When you check out applications to a type 1 hypervisor - will your traditional tools be able to verify that the license on the user endpoint is the same one under the agreement with the hosted virtual desktop?
    If you vary your update schedule for discovery - how do you audit the virtual desktop? What happens if the user never logs in during the appropriate window? What is the impact on the audit trail for tracking who touched which pieces? How will the discovery tool ingest and discern between licenses on the different virtual machines - particularly the personal VM and the company-approved VM?

    Server - when you dynamically move a virtual machine to another host - will the discovery tool know not to double count the application? Will the software vendor support the flavor of server virtualization being used? What level of support will be provided? How is it licensed compared to traditional licensing when server farms may have a cluster of more powerful boxes with multiple CPUs to support capacity on demand in the cloud (private or off premise)?

  4. What is the Impact on Performance - Oracle and many other major vendors provide prescriptive guidance on running certain applications in a virtual environment because of performance. There is no one perfect rule of thumb on virtualization and performance, but there are some things to consider. Regardless of the type of virtualization, they all run on hardware of some type and are all affected by the traditional layers in the stack, from network to I/O, CPU, and SAN/NAS. Adding layers to the stack will eliminate some problems, but you are still bound by the underlying hardware. When selecting the right type of virtualization, it is critical to understand what that hardware is, where the workload will run, and the impact on capacity requirements for individual users. There are tools out there - from BMC (Capacity Management Essentials) and Novell (via the PlateSpin acquisition) - that can assist here.

  5. What is the Impact on Security - if using a type 1 hypervisor approach - who is responsible for patching the personal VM and ensuring there are no distributed denial of service attacks on the company network? What are the implications of regulations on this approach - the Cyber Security Act, personal information acts? For application virtualization - what measures are in place to prevent viruses from executing from the virtual registry on systems where users have administrative rights, like their home PCs, employee-owned machines, or machines required to support legacy applications that cannot be virtualized? Is the right transparency there for virtual applications to detect a virus in the virtual registry? Do they employ anti-injection techniques to prevent malware from impacting the virtual environment?
Like any paradigm shift - the benefits of virtualization and cloud computing far outweigh the risks and the effort required to bring nascent markets and technology to the mainstream, but it will take time. The most important thing for customers and vendors alike is to be informed, understand the implications, see where adjustments need to be made, and make decisions based on assessed impact. I always advise customers to crawl, walk, and then run when adopting new paradigms (this is not just new technology) that will impact the overall ecosystem of people, processes, and technology. An ounce of prevention is truly worth 100 pounds of cure when you consider how dependent we have all become on technology.
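To make the double-counting question from point 3 concrete, here is a minimal sketch of counting licenses per unique VM/application pair rather than per host sighting. The record format and field names are invented for illustration - they do not reflect any vendor's actual discovery schema:

```python
from collections import namedtuple

# Hypothetical inventory records from a discovery scan. In a live
# environment these would come from the discovery tool's database.
Sighting = namedtuple("Sighting", ["vm_uuid", "app", "host"])

def licenses_required(sightings):
    """Count one license per (VM, application) pair, so a VM that was
    live-migrated - and therefore seen on two hosts - is not double-counted."""
    return len({(s.vm_uuid, s.app) for s in sightings})

scan = [
    Sighting("vm-01", "ERP-Client", "host-a"),
    # Same VM seen again after a live migration to host-b:
    Sighting("vm-01", "ERP-Client", "host-b"),
    Sighting("vm-02", "ERP-Client", "host-a"),
]
print(licenses_required(scan))  # → 2, not 3
```

The key design point is that the count keys on a stable VM identity (here, a UUID) rather than on where the workload happened to be running when the scan fired.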

Thursday, August 27, 2009

Cloud or Not Cloud - That is the Question - When it comes to Compliance

Defining the Cloud - What is and isn't
How do you measure something that is dynamic in nature? What is the impact on SAS70 audit controls? Where does an organization even begin when the definition of what is and is not a cloud is still up for debate?

Some who would like to claim expertise in this new paradigm assert that SAAS or virtualizing your server infrastructure automatically equates to cloud computing. That claim only shows a lack of experience with SAAS and with the technologies, like virtualization, that enable the dynamic nature of the cloud. The cloud is a nascent paradigm that should not be confused with Software as a Service or content providers. Companies have been distributing content - whether software, music, games, etc. - over the Internet from hosting providers since the mid-90s. What makes a cloud unique is its dynamic nature and the benefit of capacity on demand, scaling to meet the peaks and valleys of a business as it grows.

NIST has tried to loosely define the cloud and the different types of clouds that are possible; see the NIST Definition of Cloud Computing. This is significant because auditors are now going to take a hard look at what is in the cloud and how far it extends.

What does it mean to Compliance & Control?
"You can't control what you can't measure" is a befitting statement recently made by Scott Alderidge of IPServices. It is backed up by industry research reports from reputable institutions such as the IT Process Institute's series of virtualization maturity studies. There is a gap between the industry hype and realistic customer requirements.

Key findings indicate that many companies jumped into using newer technologies to enable dynamic provisioning of servers, applications, and desktops, only to find that they had to either pretend that everything was "physical as usual," revert to not using those features, or put control measures on a limited subset (i.e., there goes capacity on demand across the grid).

This is not to say that clouds are not possible for regulated environments, or that compliance with key regulations like HIPAA, PCI, SOX, etc. cannot be achieved - but it does mean that some creative thinking has to come into play to let companies leverage what makes sense in the cloud without compromising compliance (regulatory, security, or business directives).

BTW - a big pet peeve of mine is the claim, made all the time by bloggers and vendors alike, that their solution or the cloud can achieve HIPAA compliance. Sorry, folks - the system as a whole has to be reviewed as HIPAA compliant (the same goes for SOX, etc.). There are pieces of the system that can be validated and submitted to help the customer achieve HIPAA compliance, but no magic software, infrastructure, etc. can do the trick.

Start Where You Are
How does anyone know what is safe, what is not, or where to begin in an area that can alleviate so many pains faced by companies today (rising power costs, running out of capacity in the data center, the need for centralization of data)? The problems that need to be solved are not entirely new, and many companies have solved them before. The key thing is to take a step back, see the forest for the trees, and create a game plan for migration.

For example, although there are quite a few regulated applications - what about the ones that aren't? Are there specific ones that can be "tested" in a cloud or hosted outside the DMZ for greater access? Is there a specific business application that has particular peaks and valleys on certain components - like the web server or file share - but requires protected user data or information such as patient records?

Customers have successfully implemented hybrid clouds - keeping what is needed in the data center but moving many of the pieces that have greater peaks and valleys to a cloud-hosted infrastructure provider like Amazon AWS. For example, GDS achieved HIPAA compliance (yes, GDS did, not Amazon - see the Amazon case study).

What did they do? They stored protected data such as patient information and records behind lock and key within the hospital data center, but leveraged the "cloud" to deliver virtualized applications (HTTP/encryption for configuration assurance) that run locally, pull resources through the cloud provided by AWS, assemble the small subset of records typically needed by a user at the time, and parse them back. This genius architecture was developed not by theorists professing to want to define the cloud, but by whom it should be: an expert in health care, hosting, and the requirements both have for regulations, users, and technology.
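As a rough illustration of the "assemble only the small subset of records the user needs" idea - with an invented record store, field names, and policy that do not reflect GDS's actual design - the cloud-side application might only ever be handed a minimal view of each record:

```python
# On-premise record store, behind lock and key in the hospital data center.
# Field names and values are made up for illustration; a real deployment
# would expose this only over an encrypted, access-controlled channel.
ON_PREM_RECORDS = {
    "patient-17": {
        "name": "...",
        "dob": "...",
        "billing_code": "B42",
        "full_history": ["..."],
    },
}

# The cloud-hosted billing app only ever needs the billing code:
MINIMAL_FIELDS = {"billing_code"}

def minimal_view(patient_id):
    """Return only the whitelisted subset of a record, so protected
    fields never leave the data center."""
    record = ON_PREM_RECORDS[patient_id]
    return {k: v for k, v in record.items() if k in MINIMAL_FIELDS}

print(minimal_view("patient-17"))  # only the billing code is released
```

The whitelist is the control point: everything not explicitly released stays behind the data center boundary, which is what makes the hybrid split auditable.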

Friday, July 3, 2009

The Evolution of the Cloud

Beyond the Cloud Hype
The "cloud" may be new to some, but for many of us who have been in the systems management space, today's cloud is just an evolution of innovative distribution approaches that have been around for nearly a decade - companies like Electronic Arts, Music Match, and Intuit, for example, have all delivered some form of service and/or content over the Internet leveraging a scalable backend infrastructure.

What's Different?
Technology has evolved to help shift the focus to a more user-driven paradigm. Key capabilities in virtualization (application independence), dynamic provisioning (reduced latency), and enhanced capacity (for servers, networks, and clients alike), along with regulations (HIPAA, PCI, SOX) and more sophisticated users, are driving a change in the paradigm from IT-focused to user-focused. While the devil may be in the details, the fact is that companies are forced to learn balance and how to cut costs in this new era.

Why Now?
Forrester recently bucked traditional thinking with its statement that, contrary to general belief, many large enterprises are looking to deploy in the cloud. This comes as no surprise to those of us who have been in this space for quite a while. Server consolidation was fueled by many CIOs realizing that rising power and space costs were quickly becoming a significant concern for the datacenter. Server virtualization made complete sense given that many reported utilization of 10% or less. The costs to maintain, power, and cool the datacenter were greater than the cost of retiring those underused servers and deploying server virtualization technology.

Infrastructure as a Service (IAAS - such as Amazon EC2) and Platform as a Service make more sense now than ever before, for two reasons:

1) It is hard to justify buying more hardware for growth in a down economy. A perfect example: a large MSP opted to leverage an IAAS solution in lieu of purchasing, building, and maintaining a new data center to support its growing customer base. By combining newer technologies such as virtualization with an IAAS provider, it significantly cut its costs and increased its overall margins.

2) Companies are becoming more dependent on technology. Enterprises know that they will need to think about how they scale and/or contract in these turbulent times, when companies are merging, acquiring, or laying off to weather the storm. It is too hard to plan and justify new hardware when one could select a fairly low-cost solution that provides the same service.
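The justification above ultimately comes down to a back-of-envelope comparison of owning hardware versus renting capacity. A sketch with entirely made-up numbers - real pricing varies by provider, term, and workload:

```python
# Hypothetical three-year comparison: buy and run servers yourself,
# or rent equivalent capacity from an IaaS provider per instance-hour.
def owned_cost(servers, price_each, yearly_power_cooling, years):
    """Up-front hardware spend plus ongoing power/cooling."""
    return servers * price_each + yearly_power_cooling * years

def iaas_cost(instance_hours_per_year, rate_per_hour, years):
    """Pure pay-as-you-go: no capital outlay."""
    return instance_hours_per_year * rate_per_hour * years

# 10 servers at $8,000 each, $12,000/year to power and cool, over 3 years:
own = owned_cost(servers=10, price_each=8_000,
                 yearly_power_cooling=12_000, years=3)
# 10 instances running 24x7 (8,760 hours/year) at $0.25/hour:
rent = iaas_cost(instance_hours_per_year=10 * 8_760,
                 rate_per_hour=0.25, years=3)
print(own, rent)  # 116000 65700.0
```

With these assumed numbers the rented capacity wins - and, more to the point, the rented figure shrinks automatically if the company contracts, while the owned figure is sunk.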

External and hybrid clouds are panning out to be more than just a technology looking for a problem: they are a cost-effective way for the enterprise to shrink and grow its costs with the tide of demand for additional applications and resources.

Friday, June 12, 2009

Enlightened - Virtual Reality

Many write about the myths, facts, and fiction of virtualization. Some espouse that it is a revolution sure to take over the current desktop and server paradigm. This week I was blessed to spend time getting a solid dose of reality from the only view that really counts - the architects and engineers who use technology every day to solve real-world problems.

As vendors, we can learn far more by spending a couple of days with key users of our products to determine what the next best steps are, where the market is really going, and what matters most to the people who use our products and sign the checks. In this hard economy, it is time we start to listen more and hype less.

Virtualization is a tool that, like other technology, will add both benefit and unplanned complexity to current processes, systems, and workers. It is not until technologists solve real-world problems that the paradigm will really start to shift.

Routes to Virtual Reality

1) Start with a problem - for example, a problem application that has compatibility issues or needs to support a legacy version of .NET or Java. From the problem, determine which type of virtualization applies (server, desktop, or application).

2) Cut to the Chase - understand EXACTLY what is being sold. There are many different types of architectures and solutions that are often overshadowed by marketing fluff. Know the different types, the pros and cons of each approach, and the true costs - then decide.

For example, there are three distinct application virtualization architectures:
  • Agent Based - an agent connected directly into the OS kernel
  • Individual Bubble Based - an agent embedded into each virtual application
  • Hybrid - a virtual agent that lives in memory and manages the virtual bubbles

3) Don't believe the hype - there is a lot of misinformation because of the "hype" around virtualization, cloud computing and the market in general.

  • Application virtualization is NOT running an application inside a virtual machine. It IS isolating the application from the underlying OS, just as machine virtualization isolates the OS from the hardware.
  • Desktops and servers are vastly different. Servers map many users to a single system, while desktops map a single user to single applications. Each has unique requirements and requires a different approach.
  • Evolution, not revolution. This is not the time to support a rip-and-replace approach. The physical tools, paradigms, etc. will be alive and kicking for quite some time - customers want a single pane of glass, not multiple agents, interfaces, and added complexity that will increase the workload of already overstretched IT staff.
  • Hybrid is the ONLY way to go - hardware and network can't dictate business continuity. Desktop users are highly mobile and will have little patience or time to deal with large downloads, increased network costs, or not being able to do their jobs due to technology failure. User-based targeting is key to addressing the mounting challenges, regulations, and risks facing IT today.

Sunday, May 17, 2009

The 4 C's of Universal Clients - in or out of the Cloud

From a human factors approach, the new paradigm shift - both in and out of the cloud - is more user-centric, built around universal clients for the desktop. The monolithic era of tightly coupled OS, applications, and data can no longer survive and thrive in today's technology-dependent world.

Let us not forget Vista: many of us have worked with or for large organizations that wasted significant man-hours and investment planning to migrate, only for actual adoption of the platform to be delayed and/or rolled back. Why? Many cite application compatibility, usability, and impact to business continuity - all factors for ease of use. Perhaps if the definition were framed around the 4 C's of universal clients (Client, Continuity, Compliance, Control), it would be less generic and more easily defined in terms of context, content, and user. Another big factor not mentioned in these threads, but of grave concern, is compliance with security, regulatory, and business directives - particularly when acts are being passed, like the one in Massachusetts that calls for encryption of its residents' data during transport, and other acts that indicate you must adhere to state laws - see attached.

The 4 C's, defined (in or out of the cloud), are:

  1. Client - mobile, ubiquitous, with easy access to the apps and data that follow the end user
  2. Continuity - enable business continuity and uptime; provide disaster recovery with the least impact on end users and their business (reboots cost businesses millions in lost productivity)
  3. Compliance - adhere to key regulatory (COBIT, SAS70, ISO), security, and business directives; includes everything from patching to limiting execution to ownership
  4. Control - systems need to be locked down for IT, easily managed and accessed by a range of admins (SME to enterprise), encrypted, and flexible enough for end users to still do their jobs

We all know everything is relative, and there are good points made in this thread - but let's not lose sight of the fact that no two clouds, or even usages, will be exactly alike. What is required for an external cloud in healthcare around medical billing may be different than for imaging, based on the context in which the user is trying to perform their function and the criticality of their role. If someone makes a mistake or is delayed in getting a bill out, that is a minor annoyance, but the latter could be life or death. Opera tickets are entertainment, and although valid in the context in which they were presented, they do not fully reflect the magnitude of how the cloud can help or significantly impact a business.


From: Miha Ahronovitz
To: cloud-computing@googlegroups.com
Sent: Sunday, May 17, 2009 9:50:00 AM
Subject: [ Cloud Computing ] Re: I still don't fully understand why "ease of use" is a criteria of cloud
> I should put "cheap" into the cloud definition as well, because if it is expensive, then people will not use it.
Cheap, like "ease of use", is in the eyes of the beholder. A ticket to the opera costing $100 is expensive if I am a penniless student.
A gala of $1,000 is very cheap if I have a net worth of $10M.
My father said: "Expensive" is not how much it costs, but how much money you have.
If you want to make everything "cheap", just make more money.
Both "ease of use" and "affordability" should be laser-pointed at the users from your business plan.
Everett's point is a good point.
From: Raul Palacios
To: cloud-computing@googlegroups.com
Sent: Sunday, May 17, 2009 12:43:34 AM
Subject: [ Cloud Computing ] Re: I still don't fully understand why "ease of use" is a criteria of cloud
I agree - typical MS mantra. "Easy" ... is a word that shouldn't be used that often ....
From: Ricky Ho
Sent: Thursday, May 14, 2009 11:39 AM
Subject: [ Cloud Computing ] Re: I still don't fully understand why "ease of use" is a criteria of cloud
By applying your argument, I should put "cheap" into the cloud definition as well, because if it is expensive, then people will not use it.
1) You are mixing "desirable characteristics" with "definitive criteria".
2) There are other motivations that you have ignored. I may use something that is very difficult to use if it provides high value to me.
3) "Ease" is a subjective measurement. Something that is difficult for me may be very easy for you.

Friday, May 15, 2009

Universal Clients – Have Lift Off In the EXTERNAL Cloud With InstallFree

There are days that I think I must be dreaming - but I realized today that universal clients are a reality for InstallFree providers like GDS and their customers, not just in the traditional sense but in the cloud. This was a productive week in Seattle, with pivotal, explosive growth across many sectors.
How, one may ask, can someone take highly regulated applications and host them in an external cloud? InstallFree provides a unique capability for two-factor GRC that fits nicely with HIPAA. Unlike other physical and virtual packages, IF provides a unique set of capabilities - many of them found only here - that address critical control gaps, making compliance in the cloud, and therefore universal clients, a reality today, not tomorrow. The approach does not require additional hardware, OS changes, or hypervisors in the mix.
Modularity and security are more thoroughly thought through than in any other desktop paradigm on the market. Yes, I am biased as the VP of Business Development - but then again, that is why I am here: truly superior technology. The secret sauce is dynamic binding down to the machine and user level. Applications that once had to be repackaged multiple times - with complex pre/post-install scripts, targeting, and overhead - can now be reduced to a single package. Configurations can be restricted based on policy and bound at run time, making the most impossible case seem utterly simple.
What's the big deal? After over a decade of working with the top Fortune 1000 companies in this space, a product finally comes along that gets it right. For example, a doctor with a clinic, a home office for on-call work, and affiliated hospitals only needs a single app to comply with HIPAA. Because of this revolutionary approach, IT can set policies that restrict what the doctor can do: read-only on the home PC; full copy, paste, and print within the confines of the environment on the clinic PC; and, at the nurse's station, printing based on local resources to avoid fines for printing to the wrong printer - all without requiring additional technology other than a read-only view of Active Directory.
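The doctor scenario above boils down to binding permissions to the (user, machine) pair at launch time. A toy sketch of that idea - the policy table, role names, and permission strings here are hypothetical illustrations, not InstallFree's actual model:

```python
# One application package; rights are resolved per (role, machine) at launch.
POLICY = {
    ("doctor", "home-pc"):    {"read"},
    ("doctor", "clinic-pc"):  {"read", "copy", "paste", "print"},
    ("nurse",  "station-pc"): {"read", "print-local"},
}

def bind(user_role, machine):
    """Resolve effective permissions at run time.
    Default-deny: an unknown (role, machine) pairing gets no rights."""
    return POLICY.get((user_role, machine), set())

print(sorted(bind("doctor", "home-pc")))   # read-only at home
print(sorted(bind("doctor", "clinic-pc"))) # full rights at the clinic
```

The point of run-time binding is that there is still a single package to maintain; only the policy table changes as roles and machines are added.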
Imagine cutting the three applications used today down to one. No extra pre/post-install scripts, linking, sequencing, or complex procedures - just a pure, simple file copy. Simple enough that even a technophobe can leverage the easy-to-use IF Management Console without having to know how to script, link, sequence, etc. Easy as 1-2-3.
This is just the tip of the iceberg - built-in digital rights management, encryption (apps and data), two-factor discovery for "truly virtual apps" that plugs into current reporting paradigms without risks or writes to the registry, and shell integration for a seamless experience. Wow.
Why care? EMR is just around the corner. New laws around privacy and encryption are under way with the Security Czar - the monolithic world of packaging and interlocking principles will no longer suffice in the new age of Governance, Risk, and Compliance.
Not to mention versatility without impact on application richness (no dependence on server graphics processors and remote displays), whatever the delivery mode - online, offline, or in the cloud.

Wednesday, May 13, 2009

In the Clouds

Travelling from coast to coast in the "clouds," I really started to think about knowledge workers in the context of clouds. Fat pipes, adoption of clean processes, etc. have led to pretty predictable user stories for "connected" users working within a cloud - but what about the road warrior (doctor, lawyer, pole climber, UPS truck driver, sales rep, or CEO)?
Managing always-connected users is not a new feat - many solved it back in the day with mainframes and dumb terminals. Pulling the unmanaged PC into the mix with the managed PC is nirvana for many companies. How can you lock down a user while still providing enough flexibility to support them while they are "disconnected"?

Network access from an Internet café or even a data card is not a guarantee that the user will have access to backend environments. Issues with VPNs, authentication, network latency, and general access can rear their ugly heads at the most inopportune time (before the demo, during the big presentation, during an exam).
There are many approaches that can be taken, such as hybrid application and desktop virtualization (InstallFree, for example), which enables checking applications out. One idea to extend the deployment is to leverage virtual clients.

Saturday, May 2, 2009

Misconceptions about compliance and the cloud

From the thread, a lot of time and thought has gone into specific projects, yet the "auditors" may not have informed those on the thread of all the pieces, and some industry-wide misperceptions have been propagated by vendors that did not bother to educate themselves on the acts, NIST, etc. As a result, there are some misperceptions about compliance, how it can be hosted in the cloud, and the consequences.

The types of compliance and their requirements vary. The thread below mixes HIPAA, SOX, etc. - a combination only applicable to public companies that deal with patient information (insurance, hospitals, device manufacturers). Different industries are impacted by different types of regulations (financial services, for example, has the Office of Thrift Supervision, SOX, Gramm-Leach-Bliley, Basel I & II, PCI, etc.). Healthcare is also overseen by the FDA, because hospitals manufacture blood, for example.

Outsourcers such as Perot, CSC, IBM, Accenture, Unisys, etc. have had solutions for highly regulated verticals since the legislation passed (government, financial services, and healthcare - HIPAA and SOX). SAS70 is the audit control that most hosted solution providers use for the smaller SMBs/SMEs they serve, to prove that data is encrypted, isolated, and safe. This practice has matured over the years, and there are many good, documented how-to guides - the Visible Ops series, for example. I am copying one of the co-authors, a formidable expert in this area, in case he would like to comment.

Yes, CXOs need visibility into their organization to comply with SOX - but that is ONLY for public companies. Large private healthcare companies, for example, do not have to worry about SOX. HIPAA is different, as is PCI, because they affect anyone in contact with personal information (health, financial). HIPAA and other personal health information acts in Europe and Japan (which are more stringent) address access to patient information (health, billing, etc.). Depending on the act, some (such as Europe's) require that data be hosted in the country of origin; others are less stringent, requiring only that data be encrypted, access-controlled, etc. The outsourcer will need to provide SAS70 findings from an independent audit body, which the CXO needs to review. The CXO will not go to jail but will more than likely move to a different MSP if the government finds material discrepancies. They have time to clean them up, particularly if the issue resulted from a process or technology problem rather than blatant fraud like what happened in the Enron case that brought about SOX.

One suggestion would be to actually read the regulations you are speaking about - see attachment for SOX. It is not the regulations that require reform (many were written generically, not to a specific technology) but the prescriptive guideline controls such as COBIT (used by auditors to test the technical system) and frameworks like ITIL and ISO that need to be adjusted. That is not up to politicians but to government commissions such as NIST in the US - and similar agencies in other countries - to define and enhance. New standards are forming and being added to ITIL (look at V3, which changed from V2 to add a DML - definitive media library - over a DSL - definitive software library - and more around federation). Why? Because the technology evolved and changed.

The biggest gap here for the cloud is how newer technologies - like virtualization - impact those controls, making some difficult to enforce and others obsolete. It is important to understand the risks of these new technologies for GRC (governance, risk, and compliance) and either find prescriptive workarounds or select technologies created post-regulation (after 2004), so that compliance, as it evolved with NIST, has a greater chance of being baked into the architecture rather than being an afterthought until it becomes an issue.

It is not visibility, as was stated - otherwise the large outsourcers that have built a successful business on the healthcare vertical would not still be in business. More importantly, most small doctors' offices have fewer than 100 employees - they could not afford a big datacenter for compliance and need to look at alternative means like the cloud.

The key here is to join groups like the DMTF, which defines the Common Information Model, or others that influence NIST direction and ITIL or COBIT reform (the majority use the ITIL framework or ISO).

Have a great weekend.



From: Rao Dronamraju
To: cloud-computing@googlegroups.com
Sent: Saturday, May 2, 2009 9:05:16 AM
Subject: [ Cloud Computing ] Re: Clouds and Compliance
“The problem here, I believe, is one of verification. If the CXO is 100% guaranteed and convinced that the ISP solution is compliant, then he will have no problem outsourcing. Remember, he has to trust his own IT people and that their system is compliant. Can the ISP convince him that their system is the "SAME" as the internal system? There lies the problem.”
No, the problem in the cloud scenario is CONTROL and VISIBILITY….on his/her own premises, he has a LOT of CONTROL and VISIBILITY. He/she is directly responsible for the CONSEQUENCES of anything going wrong in terms of compliance. In the cloud scenario, that responsibility has PARTIALLY shifted to the CSP. The CXO is still responsible for the content and authenticity of the financial information.

I am not sure why lawyers would be interested in fixing this….The stakeholders here are the companies, the CSPs, and the government….they are the ones who benefit most from clouds.
Of course, the lawyers employed by them will work out the legal issues.

Would the govt. by itself look into this?....don’t know….

Your toy-manufacturing example is a good way to convince the CXOs that outsourcing compliance is already in practice and working.

“NIH has research grants to come up with solutions that allow for increased compliance. I hope that if the solution is very difficult, then the HIPAA requirements may have to be changed. It will take time.”

Government can wait….it doesn't run on making profits….for businesses, TIME IS PROFITS….they cannot wait….they have to take the initiative and leadership and make things happen.

From: [mailto: ] On Behalf Of satish rege
Sent: Saturday, May 02, 2009 10:15 AM
To: cloud-computing@googlegroups.com
Subject: [ Cloud Computing ] Re: Clouds and Compliance

I feel that "the lawyers will NEVER do it" is too strong. "It ain't going to happen" is stronger. I believe they didn't know that the problem exists. It may take time for them to recognize the problem and then come up with regulations to solve it. Law has always been behind technology development. So how long it will take is the question.

Note: exchanging health records electronically while complying with HIPAA is a big problem. The present government is making progress to overcome that by trying to seamlessly move records from the Pentagon to Veterans Affairs. NIH has research grants to come up with solutions that allow for increased compliance. I hope that if the solution is very difficult, then the HIPAA requirements may have to be changed. It will take time.
"Today I know an ISP who has an excellent compliance solution and a good market, and is willing to try the SaaS model.

But when I did the analysis, I realized that unless the law is changed, CXOs are not going to come forward and place their compliance systems in a public cloud as long as 100% of the compliance responsibility is with them….so this company does not have the SaaS market just yet….maybe in 6 to 12 months…."
The problem here, I believe, is one of verification. If the CXO is 100% guaranteed and convinced that the ISP solution is compliant, then he will have no problem outsourcing. Remember, he has to trust his own IT people and that their system is compliant. Can the ISP convince him that their system is the "SAME" as the internal system? There lies the problem.
Let us take a simple problem. Toys sold in the US have to be compliant with certain safety standards. Mattel outsources the manufacturing to China and takes the responsibility of compliance with US laws. (They did have a problem with a particular toy recently, and the product was recalled.) Also, I do understand the requirements on toy safety are not as complex as the problem we are discussing.

So the question is: can we build software systems that are compliant with complex law and guarantee their behavior? We all have our own opinions and experiences with regard to software verification technology. It also has a long way to go.

-satish
On Fri, May 1, 2009 at 11:52 PM, Rao Dronamraju <> wrote:
“Who wants to sign up and work with the lawyers so the regulations can be modified to the technical opportunities? Willing them to change isn't going to happen.”


Today I know an ISP who has an excellent compliance solution and a good market, and is willing to try the SaaS model.

But when I did the analysis, I realized that unless the law is changed, CXOs are not going to come forward and place their compliance systems in a public cloud as long as 100% of the compliance responsibility is with them….so this company does not have the SaaS market just yet….maybe in 6 to 12 months….

If someone knows of a case where a corporation has gone ahead and is using a SaaS compliance solution in the public cloud, please let me know….I am very interested in learning about their business case, including the legal case….

From: [] On Behalf Of brian cinque
Sent: Friday, May 01, 2009 7:29 PM
Subject: [ Cloud Computing ] Re: Clouds and Compliance

What's interesting about your comment that the lawyer community must change is the reality that it is not going to happen. Each region - geographic, national, or local - has its own laws. German laws, for example, are far stricter than those of Australia, while Massachusetts privacy laws are far stricter than, say, Iowa's. Who changes? Is Iowa going to adopt MA laws? Or is Iowa going to create a local Safe Harbor bridge to, say, Germany? Sadly, the reality is no. The question of privacy remains: which privacy laws must I adhere to? All of them? Some of them? Those of my target markets? Amazon has a European cloud, but is that a stopgap or a reality of compromises between the clouds?

Also, securing your data (in flight or at rest) is not a governance/compliance get-out-of-jail card. When companies say they are SAS 70 Type II - great, but will that hold up in Uruguay's courts? Probably not. So what is the answer? Well, right now each "Cloud" contract is being treated as an outsourcing contract. Will that scale? Time will tell, but in the meantime, if cloud expands, then being a contract lawyer is the place to be. But the question I have for the vendors who are bridging multiple cloud access methods via multiple IaaS providers and providing a service: how will those contracts be structured?

The question I have is - does it matter where your data is? The answer is yes, but I had hopes that the Privacy Group meeting in Madrid in October 09 would create an attempt at general standards, which in turn would allow for cross-border clouds. I am not sure of the URL right now, but if someone wants to find the conference URL, please do. From memory, the agenda was scaled back, and getting agreement on global standards will have to wait for another year. Which means the governance question will remain for another year. Will the lack of cloud standards remain as well?

The more I think about it: the regulators that we say must change are lawyers by trade.
We are technical folks demanding change to open the true potential of cloud, but we are constricted by the ambiguity and fear of terms like "reasonable". Who wants to sign up and work with the lawyers so the regulations can be modified to fit the technical opportunities? Willing them to change isn't going to happen.

Brian
On Fri, May 1, 2009 at 4:12 PM, satish rege <> wrote:
The main difficulty with compliance with a law, which you are so concerned about, is that laws are made with knowledge of the previous technology, and they may not be suitable for a new one that flourishes. In general, the new technology cannot provide all its advantages if it has to meet the old law. Thus there is a chicken-and-egg problem which I feel the lawyer community has to solve: that is, to make laws with technology change in mind. Perhaps the new administration, with its technological savvy, will try to look into this age-old problem.

-satish

On Fri, May 1, 2009 at 12:34 PM, dave corley <> wrote:
Sounds like an opportunity for a Storage Brokerage as a Service Provider and local storage product (NAS and SAN) vendors.

Storage Brokerage as a Service Provider - host EMC Atmos or similar storage brokerage software. The brokerage maintains enterprise-specific storage policy and SLAs. The brokerage also specifies target repositories for stored information based on metadata contained within the file/information. If super-colossal-critical-SOX-compliance data is required to be produced for audit, the policy is adjusted for associated information classified through metadata as "compliance-important" as follows:

1. Primary backup to local store (premises NAS for small business, premises SAN for enterprise, mattress for consumer). Keep the family jewels and photos of the kids close.
2. Secondary backup to storage repository SP "A".
3. Tertiary backup to storage repository SP "B".
4. Encrypt all data with AES-256 prior to all backups.
5. Establish policy/process and train the IT folks/VARs responsible for the processes. If this data is so important, assign a "custodian" responsible for maintaining the information metadata. Heck, most companies do this kind of item "marking" for inventory control.
6. Data integrity monitoring frequency - every X days.
7. Data loss reporting - within Y hours.

Other, less expensive/expansive policies are applied to less critical information. Additional policies allow storage arbitrage - if Wells Fargo's storage repository rates drop, substitute them as SP "A" and drop "Fred's MattressInTheCloud". Tiered/layered security and defense in depth - not just a military concept.

Disclaimer: I have never worked for EMC, SP "A", SP "B", or Fred's MattressInTheCloud.

Dave
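The tiered policy above can be sketched in a few lines of Python. This is a hypothetical illustration, not the EMC Atmos API - the policy fields, classification names, and placeholder X/Y values are all assumptions made for the sketch:

```python
# Hypothetical sketch of a brokerage mapping metadata classification to a
# tiered backup policy. Names (SP "A", SP "B", field names) are illustrative.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class StoragePolicy:
    classification: str
    targets: List[str]            # ordered tiers: primary, secondary, tertiary
    encrypt: Optional[str]        # cipher required before any backup, or None
    integrity_check_days: int     # "every X days" - placeholder value
    loss_report_hours: int        # "within Y hours" - placeholder value


POLICIES = {
    "compliance-important": StoragePolicy(
        classification="compliance-important",
        targets=["local-store", 'SP "A"', 'SP "B"'],
        encrypt="AES-256",
        integrity_check_days=7,
        loss_report_hours=4,
    ),
    "default": StoragePolicy(
        classification="default",
        targets=["local-store"],
        encrypt=None,
        integrity_check_days=30,
        loss_report_hours=24,
    ),
}


def policy_for(metadata: dict) -> StoragePolicy:
    """Pick the storage policy from a file's metadata classification."""
    cls = metadata.get("classification", "default")
    return POLICIES.get(cls, POLICIES["default"])


if __name__ == "__main__":
    p = policy_for({"classification": "compliance-important"})
    print(p.targets)   # three tiers for compliance-important data
    print(p.encrypt)
```

The storage-arbitrage idea then becomes a one-line change: swap the entry in `targets` when a cheaper repository appears, without touching the classification logic.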

On Fri, May 1, 2009 at 12:50 PM, Rao Dronamraju <> wrote:

The Compliance landscape of Clouds looks VERY MURKY.

The fundamental problem is the Criminal Penalties associated with non-compliance although Civil Penalties are also equally troublesome.

For instance, Sarbanes-Oxley says the CXOs are responsible for the integrity of the financial information and also for the integrity of the controls in place.

Not only do they have to sign off on the integrity of both, external auditors have to attest to the authenticity and integrity as well.

So if and when enterprises plan to move to public clouds, there are some interesting situations one would run into.

Suppose there is non-compliance in the establishment, management, and maintenance of the controls - who would be responsible?....

The CSP or the CXO of the enterprise?....

Similarly, if the integrity of the financial information is breached, who is responsible?....

Remember, there are criminal penalties involved, not just civil penalties….

Can any of these be fixed with SLAs?....probably the civil penalties, but definitely not the criminal penalties. I do not think the law would allow a CSP to go to prison in place of a CXO.

Maybe some legal expert in the group can speak to it.

So the interesting problem here is, how would you distribute the compliance responsibilities and liabilities associated with non-compliance between the CXOs and the CSPs?....

The only way seems to be through legislation: the legislature changing the law in such a way that the penalties are levied on the parties RESPONSIBLE for the integrity of the controls and the financial information. If the controls fail, the CSP goes to jail; if the financial information is fudged, the CXO goes to jail.

How likely is this to happen?.....

How soon could this happen?....We all know how fast the legislature moves…..

The adoption and migration of enterprises to public clouds could depend a lot on this.

The other alternative is to not move the compliance systems to the clouds at all…..until the legislature catches up with the technology.

Friday, May 1, 2009

Virtual Reality - Compliance, Desktops, and Cloud

This week a fellow blogger on the cloud posed some interesting questions around compliance that highlighted how poorly understood this area is when it comes to the cloud and virtualization - across desktops, apps, and to some degree servers.

Compliance is an interesting element in its own right, with many twists and turns depending on the industry (healthcare, financial services, manufacturing, etc.), the type of company, what technology is in place, whether it is actually used in a way that adheres to COBIT, and, for outsourcing, the controls the outsourcer has in place and whether they would pass a SAS 70 audit.

Yes - SOX does say that the CXO will go to jail if they do not adhere to proper controls and conform to the standards identified by NIST. Truth be told, very few have actually gone to jail: although several companies (527 in the first year, according to the IT Governance Institute) have had material discrepancies, their CXOs have not seen much in the way of jail time. The real teeth of SOX are having to disclose in a public place like the Wall Street Journal - the impact on the stock price is a much bigger driver. Companies typically have time to clean up their act and fix the material discrepancy. The act itself is very ambiguous and doesn't actually define all the components; it leaves that up to NIST and COBIT (not to mention additional flexibility for auditors) to deem whether a company is in compliance. It is the system - manual or automated - that enables compliance, not the technology.

Having said that, who has ownership, and how do you determine compliance for the cloud? Many of the compliance factors - whether SOX, HIPAA, PCI, GLB, etc. - have been factored into MSP and outsourcing models and are part of SAS 70 audit controls, at least for physical systems. Otherwise companies like Amazon would have a difficult time maintaining their services, given the sensitive data involved.

The real gap that needs to be thought through for the cloud is what the newer technologies that enable it - like virtualization - do to the traditional controls used to maintain compliance, and how the lack of understanding about those technologies impacts companies' ability to deploy them fully. In my previous company, ITPI and I worked on research in this area across several different companies, interviewing everyone from CXOs to operations to really understand the gaps.

We recorded an introductory webcast on this topic:

ITPI is targeted to release the overall study - Kurt Milne, copied here, can provide more insight on the details. I must say it is a real eye-opener and a significant area in which quite a bit of work needs to be done.

The real concern is that standards such as COBIT and the Common Information Model (CIM, SMASH, DASH, etc.) are based on the physical world and were created without virtualization in mind. The DMTF is adding virtualization to CIM, but there is still quite a bit to be done from a back-end systems perspective around virtual apps, desktops, and servers to ensure compliance is maintained.

In some ways virtualization poses more risks to existing controls, particularly around security, and in other ways it makes new controls possible. The key is understanding what those risks are, the architecture (not all are created equal), ways to work around them, and what can be deployed versus what cannot based on the application, the oversight required, etc. Companies work around this today - so it is also possible in the cloud.

The key here is that while everyone is trying to define this new market, it is critical to understand the current physical paradigm, its processes, and its controls, and how we impact them, before creating the solution. Clearly, as with all new paradigms and markets, there is quite a bit for all of us to define, educate each other on, and understand before jumping in.