Monday, December 31, 2012

The 2012 numbers: 68+1 posts


This blog had 4,000 visits in 2012. There were 68 new posts in 2012, and counting this one that makes 69 … you will understand that I could not resist ;-)
The visits came from a total of 62 countries (on that front it will be hard to pick up 7 more countries in just one day).
This has been only an excerpt; anyone interested in the full report can click here.

NOTE: The report was prepared automatically by “the WordPress.com stats helper monkeys” (but it applies to this blog too, since, as you know, they are duplicates) about this blog's year 2012.

Wednesday, December 26, 2012

Energy Efficiency in Data Centers and Walhalla

As I mentioned in my previous post, in this Christmas period, when I am also right in between having submitted a proposal (last December 4th) to the EU FP7-SMARTCITIES-2013 call (titled SMART DC FED, “A SmartGrid & DCIM based Approach for a Sustainable DC Federation”, addressed to “Objective ICT-2013.6.2 Data Centres in an energy-efficient and environmentally friendly Internet”) and preparing a new one whose deadline is January 15th, I am not in the mood at all for stealing more time from my family than is strictly necessary. However, I have been mentioned in a news item published by “Computer World”. Just mentioned, no more, but for ordinary people like me that is not usual, so I'm proud of it. That news item is about an event where I was invited to speak. The meeting (titled “Innovación Abierta en Eficiencia Energética”, i.e. Open Innovation in Energy Efficiency) was organized by enerTIC (a Spanish association for fostering energy efficiency in IT) and the CDTI (the Spanish public entity responsible for fostering and funding R&D activities). I was asked to speak about TISSAT's R&D projects for improving the energy efficiency of our “DataCentres Federation” (note: a “Federation” is a set of trusted, high-speed-linked DCs, usually owned by the same entity).

In my speech I outlined our motivation and the different projects we are carrying out (or have just finished), which are driven by, and converge in, the construction of Walhalla, a DC certified as Tier IV by The Uptime Institute that operates at a PUE of 1.15 thanks to the combination of multiple techniques, methods, processes and technologies:
  • On-site self-generation of power.
  • Tri-generation energy production by gas engines in the first phase (and by fuel cells in the second phase).
  • Free cooling.
  • Heat confinement.
  • Reuse of residual and otherwise unused energy.
  • Overhead distribution of power, data and cooling.
  • Integration between ICT management systems and the energy infrastructure (DCIM).
  • Designed for Cloud Computing services.
And where new efficiency metrics are being measured (a small illustrative sketch follows this list):
  • Of course, PUE (Power Usage Effectiveness)
  • WUE (Water Usage Effectiveness)
  • CUE (Carbon Usage Effectiveness)
  • GEC (Green Energy Coefficient)
  • ERF (Energy Reuse Factor)
  • Others from the point of view of Life Cycle Sustainability Assessment for Greenhouse Gas (GHG) emissions.
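As a rough illustration of how the first five of these metrics are derived from plain meter readings, here is a minimal Python sketch; all figures are invented examples (chosen so that the PUE comes out at the 1.15 quoted above) and are not actual Walhalla measurements:

    # Toy computation of the data-center efficiency metrics listed above.
    # All meter readings are invented example figures, not Walhalla measurements.

    total_facility_energy_kwh = 1_150_000   # everything the site consumes in a period
    it_equipment_energy_kwh   = 1_000_000   # energy consumed by the IT equipment alone
    reused_energy_kwh         = 115_000     # e.g. residual heat recovered and reused
    green_energy_kwh          = 300_000     # renewable / self-generated share of the total
    water_litres              = 1_800_000   # water consumed by the site
    co2_emissions_kg          = 260_000     # CO2-equivalent emitted for that energy

    pue = total_facility_energy_kwh / it_equipment_energy_kwh   # Power Usage Effectiveness
    wue = water_litres / it_equipment_energy_kwh                # litres per IT kWh
    cue = co2_emissions_kg / it_equipment_energy_kwh            # kg CO2 per IT kWh
    gec = green_energy_kwh / total_facility_energy_kwh          # Green Energy Coefficient
    erf = reused_energy_kwh / total_facility_energy_kwh         # Energy Reuse Factor

    print(f"PUE={pue:.2f}  WUE={wue:.2f} L/kWh  CUE={cue:.2f} kgCO2/kWh  "
          f"GEC={gec:.2f}  ERF={erf:.2f}")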
Coming back to the R&D-related projects that led to the Walhalla improvements, they range:
  • From pure process-innovation projects, either:
    • improving energy-efficiency processes: ISO 50001, the EU Code of Conduct for Data Centres, ISO 14001, a pilot project with AENOR (Marca CEE CPD), etc.;
    • or improving operational efficiency: ISO 20000, ISO 27001, ITIL best practices, etc.
  • To research and development projects:
    • directly addressing the energy efficiency of our DCs: “Green DC” and “CDPVerde”;
    • or addressing their operating efficiency: “Predictive I2TM”, “DCIM”, etc.;
    • or those that seek both to improve energy efficiency and to offer new advanced services: “RealCloud”, “CloudSpaces”, “StackSync”.
  • A complete list of our R&D projects can be found at http://www.tissat.es/en/rdi/projects.


  • Since copying is easy and convenient, the remainder of this post (the part covering the other speeches at the event) is extracted from the article published in “Computer World”, originally in Spanish:

    The meeting brought together leading vendors, public bodies, universities and innovation centres, as well as corporate users, with the aim of serving as a meeting point and identifying possible common interests for developing collaborative projects. Attendees included experts from innovative companies such as Grupo Santander, Correos, Endesa, Telefónica, BBVA, Renfe, Ferrovial and Ono, among many others.

    The event was opened by Mr. Javier García Serrano, head of the Energy, Chemistry, Environment, Production and Services Department of the CDTI (Centro para el Desarrollo Tecnológico Industrial), who referred to his organization's commitment to “R&D projects, individual or in consortium, as a fundamental instrument for fostering companies' competitiveness, as well as to new technology-based companies through the NEOTEC programme and the recently launched public-private initiative INVIERTE ECONOMÍA SOSTENIBLE”. Regarding energy efficiency in particular, he highlighted the relevance of the EU's future Horizon 2020 programme, which “foresees a provisional budget of around 14 billion euros for the societal challenges of secure, clean and efficient energy, and smart, clean and integrated transport”. He closed his speech by recalling that several calls related to energy efficiency are still open, with a budget of 26 million euros for applying ICT to cooperative mobility and 18 million for Smart Grids. He also highlighted the high Spanish participation in the recently closed SMARTCITIES call.

    Next, Mr. Borja Izquierdo, CDTI representative on the Energy Programme Committee of the Framework Programme Department, spoke about the European Innovation Partnership (EIP) on Smart Cities & Communities, which will contribute to deploying joint innovative solutions in the areas of ICT, transport and energy in order to increase energy efficiency.

    The plenary session closed with the talks by Mr. Gabriel Cuervo, Innovation Project Manager at Ferrovial, and Mr. Carlos Cebrián, R&D&I Director at Tissat (yours truly). Both detailed their companies' commitment to energy efficiency and sustainability in their respective fields, as well as their model of innovation open to collaboration with public bodies, industry and universities.

    After the plenary session came a series of working round tables, aimed at fostering collaboration among the attendees in the different areas where ICT can be applied to energy efficiency, such as smart buildings, data centers, IT infrastructures, collaboration, utilities, eGovernment, cloud computing, electric vehicles, mobility and smart cities. Each table included a CDTI representative to help resolve questions raised during the discussions and to support the enerTIC moderator.

    During the event, enerTIC presented a guide for improving energy efficiency, which gathers the main trends and solutions in applying ICT to energy efficiency from the perspective of experts from the Spanish and European administrations, consulting firms, analysts, industry representatives and corporate users. It also includes references to the 100 main providers of this kind of ICT solutions: one of which, of course, is TISSAT.

    Friday, December 21, 2012

    A funny video about OpenStack geeks

    In these days just before Christmas I'm not in the mood for writing a serious post, but I remember that perhaps TODAY is my (our) LAST opportunity to do it. So, let me say that we're using OpenStack not only for doing business but also for R&D projects such as “RealCloud” and “CloudSpaces”, which are partially funded by European Commission programmes. So I'm an OpenStack fan, and that's the reason for this funny new video about OpenStack geeks that I've just copied from http://www.dopenstack.com/:
     
    Who says geeks cannot dance? And sing, for that matter? They can. The enthusiasm after the OpenStack Summit led to this rocking video, “Cloud Anthem.” An excerpt from the lyrics goes like this:
    We’re gonna turn it up, we’re gonna rock your cloud
    This is the open future that we’re talking ’bout
    So baby listen close, you gonna feel this track
    Cause you know you gotta get this OpenStack

     
     
     

    By the way, I wish you a MERRY CHRISTMAS AND A HAPPY NEW YEAR in this NOT-ENDED world.

    Wednesday, December 12, 2012

    Cloud SLAs: a review of current contracts

    Last week I spoke about Cloud SLAs (Service Level Agreements) from a technical point of view, and to back my opinion I cited a post by Gartner analyst Lydia Leong stating that Amazon Web Services (which Gartner recently named a market leader in infrastructure-as-a-service cloud computing, and I think everybody agrees) has the dubious status of “worst SLA of any major cloud provider”, and that HP's newly available public cloud service could be even worse. Let me recall that the main reason (but not the only one) for this statement is the strict service-architecture requirements imposed by Amazon and HP. As stated, this isn't the only reason: the SLAs are also unnecessarily complex and limited in scope.

    Moreover, in an amazing post titled “SLA feather allows you to fly in the cloud”, another Gartner analyst, Jay Heiser, uses the analogy of Disney's cartoon Dumbo, and he reminds us that AN SLA IS NO MORE THAN AN EXPRESSION OF INTENT; IT IS NOT EVIDENCE OF DELIVERABILITY. In fact, an SLA from a public cloud service promising some sort of recoverability can be a crow feather, clutched in the trunk of the enterprise elephant, giving it the false courage to be willing to fly in the public cloud.

    Another post by Jay Heiser (“Bulletproof Contracts”) summarizes some contractual terms provided by a prominent SaaS vendor, which I copy below (including Jay's funny comments):
    • We believe that we obey the law. If there are any questions pertaining to how your data is handled within our system, it is YOUR problem.
    • We won’t give your data to the police. Unless we do give it to the police.
    • When this contract is over, you may have the ability to get your data back, but that is YOUR problem, not ours.
    • If one of your customers contacts us, we won’t give them anything. Unless we are forced to give them something.
    • We will store the data in whatever country we want.
    • We might have third parties help us with this, and they of course would be held to the same weak levels of standard as we contractually obligate ourselves to follow.
    • You the customer are obligated to obey the law at all times, even if you have no idea what that may entail. (Guess what happens if there is a dispute with us and our lawyers can find some way to demonstrate that you didn’t completely follow the law.)
    • We will follow appropriate security measures—as understood by us.
    • We will back up your data at least once a week, we will review our procedures periodically, although this seems unnecessary, given that none of these procedures were knowingly designed to fail. If we have the slightest plan for testing our ability to recover, we are not sharing it with you and we hope that you won’t ask that question.
    • If any of our support personnel ever accesses your data, by definition, it is necessary access.

    Finally, copying again, but now from NetworkWorld, I analyze the contracts usually signed regarding cloud SLAs, the terms they include, what their impact is, and how often they are present in current cloud contracts.

    All these points are summarized in the next table (please note that I have only gathered NetworkWorld's data into a table, so for a more accurate and complete explanation you should read the NetworkWorld post):
    SLA contract terms analysis
     
    Note: Encryption-related clauses are not present because, in my opinion, they are currently either a new service in themselves or a differentiating service feature.

    Friday, December 7, 2012

    Cloud SLAs: a technical point of view


    Initially, when I started the first version of these comments, I had decided to write about Cloud SLAs (Service Level Agreements), because I remembered how some friends of mine stressed their importance at the last ISACA conference I attended (even though the subject was only lightly treated in the conference discussions, since other topics captured the attention), and because, from my point of view, SLAs are almost nil in current Cloud contracts, as a recent Gartner study states. In fact, it is a subject I have dealt with in previous posts: some in English (“Real” Cloud Computing Services vs. “Fake” Cloud Computing Services), and some in Spanish (El Cloud Computing y la ISO-20.000 e ITIL: conteniendo el sobre-aprovisionamiento – Capacity Management).

    But, I don't know how, a few lines later I was talking about Cloud risks in general, and a few lines further on I was thinking of security concerns, trying not to talk once more only about privacy and regulatory compliance (which is all anyone seems to think about in many discussions), but also about data loss, data integrity, and so on. Remember what CIA means for security: Confidentiality, Integrity and Availability, to which everybody adds a second A for Authenticity, in both senses: to avoid repudiation and to grant information access only to the right people. Then I reminded myself that, although security is without doubt the most important Cloud risk, it is not the only one, and it is even losing relative importance against others. In fact, as time goes by, new risks are becoming more important for the CSOs (Chief Security Officers) of big companies that are already using Cloud services (75% of them “are confident in security of their data currently stored in the cloud”, according to a recent VMware report): portability, vendor lock-in, standardization, the learning curve, and so on (see my Spanish post ¿”Nubarrones” en la Nube?, whose title means something like “Dark clouds over the Cloud?”). And we must not forget SLA non-compliance, which was the idea I was trying to write about.
     
    And then, when I came back to the initial point, I decided to throw everything written so far into the trash (along with the time spent), to focus only on this subject without broadening it (I'll treat the other subjects in future posts), because otherwise this would become confusing and difficult to understand, and to rely on the information that cleverer minds have written. So, here I go:

    According to Gartner analyst Lydia Leong, Amazon Web Services (which Gartner recently named a market leader in infrastructure-as-a-service cloud computing, and I think everybody agrees) has the dubious status of “worst SLA of any major cloud provider”.

    However, HP's newly available public cloud service could be even worse. By the way, HP launched the general availability of its HP Cloud Compute on Wednesday, December 5th (HP Cloud Compute is a pay-as-you-go IaaS service that the company first announced earlier this year as a beta program and is now generally available), and it's based on OpenStack (my favorite open cloud platform; I promise to explain my reasons in a future post and compare it against other open platforms).
     
    Although AWS has voluntarily refunded customers impacted by major downtime events even when the SLA did not require it (AWS has experienced three major outages in the past two years), as I underlined, that has been a voluntary decision, not a consequence of the signed SLA. In fact, Ms. Leong recommends investigating cyber-risk insurance to protect cloud-based assets, because the SLA requirements basically render the agreements useless. “Customers should expect that the likelihood of a meaningful giveback is basically nil”, she said. The main reason for this statement is the strict service-architecture requirements imposed by Amazon and HP:
    • Both AWS and HP impose strict guidelines on how users must architect their cloud systems for the SLAs to apply in the case of service disruptions, leading to increased costs for users.
    • AWS's SLA, for example, requires customers to run their applications across at least two availability zones (AZs), which are physically separate data centers that host the company's cloud services. Both AZs must be unavailable for the SLA to kick in. Of course, that implies higher costs.
    • HP's SLA only applies if customers cannot access any AZ. That means customers potentially have to architect their applications to span three or more AZs, each one imposing additional costs on the business.
    However, this isn't the only reason; other aspects of the SLAs contribute to voiding their effectiveness as a user's control and defense: the SLAs are also unnecessarily complex (“word salads”) and limited in scope. For example, both the AWS and HP SLAs cover virtual machine instances, not block storage services, which are popular features used by enterprise customers. AWS's most recent outage specifically impacted its Elastic Block Store (EBS) service, which is not covered by the SLA: “If the storage isn't available, it doesn't matter if the virtual machine is happily up and running — it can't do anything useful”.
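    To make the AZ clauses above concrete, here is a minimal Python sketch of the two eligibility rules as I have summarized them; the logic is deliberately simplified and all names and data are illustrative, not taken from any provider's actual SLA text:

        # Minimal sketch of the AZ-based SLA eligibility rules summarized above.
        # The logic is deliberately simplified; real SLA wording is far more nuanced.

        def aws_style_claim_eligible(deployed_azs, unavailable_azs):
            """AWS-style clause: the app must span at least 2 AZs, and *all* of the
            AZs it is deployed in must be unavailable for a credit to apply."""
            return len(deployed_azs) >= 2 and set(deployed_azs) <= set(unavailable_azs)

        def hp_style_claim_eligible(all_region_azs, unavailable_azs):
            """HP-style clause (as summarized above): the SLA only applies if every
            AZ in the region is inaccessible."""
            return len(all_region_azs) > 0 and set(all_region_azs) <= set(unavailable_azs)

        # Example: an app deployed in two AZs, with only one AZ down -> no credit.
        deployed = {"az-1", "az-2"}
        print(aws_style_claim_eligible(deployed, {"az-1"}))                 # False
        print(aws_style_claim_eligible(deployed, {"az-1", "az-2"}))         # True
        print(hp_style_claim_eligible({"az-1", "az-2", "az-3"}, {"az-1"}))  # False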

    In the next post I'll come back to this subject and analyze the contracts usually signed regarding cloud SLAs, the terms they include, what their impact is, and how often they are present in the contracts.

    Tuesday, December 4, 2012

    SofCloudIt relies on Tissat as its infrastructure services partner

    Let me proudly copy this news item:
     
    SofCloudIt has relied on Tissat, a company specialized in the integral outsourcing of mission-critical services, to consolidate its position in Latin America. SofCloudIt has more than 10 years of experience in online services, providing a platform for the distribution and marketing of products and services in the Cloud model. “SBC, Soft Business Cloud” will now be hosted in Tissat's Walhalla DataCenter thanks to the agreement reached by both companies. Thus, the two companies become partners to expand their business and address the new opportunities presented by the international market with a global offer in infrastructure and services.

    In this way, Tissat becomes SofCloudIt's infrastructure partner and offers maximum-security, high-availability housing in its DataCenter, Walhalla, certified as Tier IV by the Uptime Institute, along with the many services associated with this generation of eco-efficient infrastructure. Both companies expand their service portfolios and are able to offer their customers a comprehensive range of services, solutions and infrastructure.
     
    “This partnership promotes the international expansion that a company like SofCloudIt pursues, because right now our Datacenter is a unique reference at the European level; we have the maximum security certification, Tier IV, guaranteed by the Uptime Institute, and it is commercially available. Furthermore, Tissat's Datacenter is the only eco-efficient one, because it has 30% fewer emissions than a conventional center and saves 40% of the energy,” said Manuel Escuin, General Manager of Tissat.
     
    Meanwhile, Jesus Angel Bravo, Managing Partner of SofCloudIt, adds: “SofCloudIt presents itself as an ‘Enabling Cloud Broker Solutions’ company, so that Telcos, ISPs and VARs can offer Cloud services to SMBs. We needed a partner with the maximum level of service and we have found that in Tissat. With them we can undertake our expansion in Europe (EMEA) and Latin America with guarantees. The customers of our SBC platform, which we have developed with our technology partner Parallels and its Parallels Automation solution, are companies like Telcos, ISPs and integrators, all of them requiring a high level of SLA that we can now guarantee thanks to our partnership with Tissat and its Tier IV Datacenter infrastructure. Together we can be the perfect partner for these companies, so that they incorporate our cloud business model oriented to the SMB market (small and medium enterprises).”

    Wednesday, November 28, 2012

    An InfoGraphic that summarizes some of the subjects covered in the ISACA Valencia Congress about the Cloud

    Once again, let me show an InfoGraphic copied from cloudtweaks.com that in some way summarizes data I believe every IT manager is currently thinking about (at least that's my experience, and I confirmed it at an ISACA Congress I took part in last Friday, November 23rd).
    It's based on a survey that Rackspace commissioned from McLaughlin and Associates in April 2012. The respondents were 500 IT decision makers working for entities (businesses or organizations) that use Cloud Computing.
    While the survey shows that 83% of them really believe the Cloud has made their organizations more effective, 86% of them are also worried about vendor lock-in. Other aspects covered in the Congress also have their data in the statistics: 60% are seriously concerned about rogue IT, and so on …
     

    Saturday, November 24, 2012

    Recognition for ISACA-Valencia for its VI Congress, this year dedicated to the Cloud (Is there working life after the cloud?)

    These days I am very, very loaded with work (how dare I complain about that in times like these?), to which a handful of small personal problems have been added, and as a result this blog has been rather neglected.

    However, in the middle of this work overload, yesterday (Friday) I let myself be convinced to attend the event organized by the Valencia chapter of ISACA on “Risks in the Cloud” (as part of its VI Congress); and if I did so, it was both because I was able to attend last year and liked several of the talks, and because of the good work of a great professional who “roped me in” to take part in his round table.
     
    And I did not regret it at all, quite the opposite; in fact, I regret not having been able to attend Thursday's previous session. That is why, even though a long working day awaits me, I want to devote these minutes to praising yesterday's session in every respect. I think it was one of the best sessions on the Cloud I have attended, with magnificent speakers who talked about the Cloud, and not about their companies or about other technological concepts that are as interesting and useful as Cloud Computing but are not Cloud. And, moreover, in Valencia: a luxury. It was also proof that, to put on a great event and bring together a good set of professionals, you do not need the CEOs, CIOs, etc. of the big Spanish companies or of the multinationals (who sometimes turn up with presentations prepared by assistants and prove to be not very versed in the subject). That does not mean they were missing from this Congress (among the speakers there were leading references of the sectors they represented, such as INTECO, G.V., APEP, CUATRECASAS, MICHAEL PAGE and ISACA itself, to name just a few), but rather that a wise combination of them was achieved:

    Mr. Pedro García Ribot, Regional Secretary for Public Administration, gave a brief and relaxed yet focused opening, as befits a person in his position.

    I liked the talk by Florencio Cano, CEO of SEINHE, on IT security aspects of the Cloud, which was followed by a completely different angle (a clever combination of flavours, as the gourmets would say): the view of technology profiles from the HR perspective, presented in an entertaining and instructive way by Raúl Suárez, Director responsible for technology at Michael Page (who, in my opinion, lacked a little time to cover the new technology profiles the Cloud brings, which does not detract from the brilliance of his presentation, but only means that next year he should be given a bit more time).

    After the break came Eduard Chaveli, vice-president of APEP, who, under the cleverly devised title “¿Cloudicamos?”, reviewed the legislation affecting information security in Cloud Computing and the contracts that should be negotiated (before “cloudicating”) with CSPs (Cloud service providers).

    He was followed by a round table where, in my far-from-objective opinion (since I was part of it), there was a proper balance of the different sectors: starting with Pablo Pérez San-José (Manager of INTECO's Information Security Observatory), very solid in his contributions; Inmaculada González (of Cuatrecasas), very well versed, as expected, in the legal matters, in which she was accompanied by Eduard, who led us into a trap by presenting as his own some statements by Sarkozy on the controversial French protectionist proposal for the Cloud environment. And I have left for the end the Data Center sector, represented by Rafael García, Technical Director of Nixval (a Valencian DataCenter) and an excellent professional of whom any praise I add would hardly be objective, since everyone knows he is a good friend of mine, and by yours truly, representing Tissat, currently the only Spanish company (and one of the few in Europe) offering public services from a DataCentre certified as Tier IV (by The Uptime Institute), services which include true Cloud services.

    Anyway, I have the impression that I am forgetting someone or something very important, so I will allow myself to alter Bécquer's verses:
     
    What is “dinamización” (energizing)?, you say, while you fix your agile words on my lazy mind.
    What is energizing? And you ask me?
    Energizing … is you: … Mr. Javier Peris Montesinos

    Note: this little joke will only be understood by those who attended the congress, but it is simply meant to praise Javier's excellent work as its master of ceremonies.

    Friday, November 16, 2012

    Where is the Data Center Cloud traffic coming from?

    Just a few days ago Jimmy Daly published a great InfoGraphic (backed by Cisco data) about the growth of the Cloud by 2015. It forecasts that traffic in 2015 will be 4 times that of 2010, and that while cloud DataCentre traffic was 11% of the total in 2010, it will grow to 34% by 2015. Finally, it breaks down the different sources of the traffic (e-mail, social networks, YouTube, ...), and it turns out that those sources represent only 17% of the global data center traffic. The other 83% comes from DC-to-DC traffic (7%) and … what is surprising and funny to me: the remaining 76% of the traffic is intra-DataCenter traffic, that is, traffic inside the same DataCenter.
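    As a quick sanity check of that breakdown, here is a tiny Python sketch; the total volume used below is an arbitrary illustrative figure, and only the three percentage shares come from the infographic:

        # Split a hypothetical total data-center traffic volume using the shares above.
        total_dc_traffic = 1000.0  # arbitrary units, purely illustrative

        shares = {
            "DC-to-user (e-mail, social networks, video, ...)": 0.17,
            "DC-to-DC": 0.07,
            "intra-DC (inside the same data center)": 0.76,
        }

        assert abs(sum(shares.values()) - 1.0) < 1e-9  # the three shares cover all traffic

        for category, share in shares.items():
            print(f"{category}: {share * total_dc_traffic:.0f}")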
      

    Monday, October 22, 2012

    Have a good laugh about the Cloud Computing buzzword and the related hype


    Several times in previous posts I have spoken about Cloud Computing as a buzzword and about the hype around it. I tried to define what I call "real" cloud computing (following the NIST definition). These effects are even bigger when the subject is how to move an application to the Cloud, i.e., how to build a SaaS either from scratch or from an existing (legacy?) application. This is an important matter of research and continuous advances, and I promise to tackle it soon in a post. However, today I suggest you have a good laugh reading this Dilbert strip on the subject:


    Wednesday, October 10, 2012

    2nd RealCloud survey, ‘Analysis of the supply and demand of Cloud Computing services’ (Oct 20)

     
    The Instituto Tecnológico Metalmecánico (AIMME), in cooperation with TISSAT, COREX, UJI and URV, is carrying out, within the framework of the REALCLOUD PROJECT, a study on the supply of and demand for cloud computing services in companies, with the goal of adapting/developing an offer of Cloud Computing services oriented to the real needs of SMEs and the self-employed, based on:
    • Learning what perception companies have of Cloud Computing, and how that perception evolves over the 2012-2013 period, through a half-yearly survey.
    • Identifying the current use of Cloud Computing in our environment, and forecasting its evolution.
    • Detecting which organizational/cultural, technological and economic barriers we must all work around or tear down so that companies start using Cloud Computing effectively and obtain real benefits with a clear return on investment.

    The results of the first survey, carried out between February and April 2012 with more than 250 participating companies, have been published through the Bubok and Amazon digital bookstores, in paper and PDF format, under the title “Análisis de la oferta y la demanda de los servicios Cloud Computing”, and can be obtained free of charge by taking part in the next wave of the survey.
     
     
    In that report, over 120 pages [1], a rigorous statistical analysis is performed on the variables used in the questionnaire, with interesting results both from the cloud supply and the cloud demand points of view.
     
     
    From TISSAT we encourage every professional, company or body interested in the subject to take part in the next edition of the survey through the link:
    http://encuesta.realcloudproject.com
     
    with the following advantages:
    • You will receive a message with the results of the first survey FREE of charge, and likewise those of the second one, once the data have been processed after the survey deadline (Oct 20, 2012).
    • You will receive a message about upcoming events (workshops and/or courses) related to CLOUD COMPUTING, such as the following course:
    Course ‘Virtualization and Cloud technologies for companies’ (Nov 6)
    • You will receive a message with the links to the videos and documentation of the REALCLOUD workshop ‘Application of Cloud Computing in companies’ (Feb 23), with 130 registered companies and 31 bodies/companies collaborating in its dissemination, as shown in the following video:
    For more information:
     
    [1] TABLE OF CONTENTS:
    Prologue: Climbing the first steps towards the cloud, but with our ‘feet on the ground’ and without becoming ‘giants with feet of clay’
    1. Introduction: Do European Cloud services have a future? And the Spanish ones?
    2. Cloud Computing
    2.1. Basic concepts of Cloud Computing
    2.2. Benefits of using Cloud Computing
    2.3. Drawbacks of using Cloud Computing
    2.4. Objectives of the report
    2.5. Presentation of the survey. Variables
    3. Data analysis
    3.1. General characteristics
    3.2. One-dimensional analysis
    3.3. Two-dimensional analysis
    3.4. Analysis with aggregated (calculated) variables
    3.5. Multivariate analysis
    4. Conclusions
    5. References
     

    Wednesday, October 3, 2012

    Cloud Computing and the EU Digital Agenda: a step in the right direction, but too short

     

    A few days ago, once again a tweet by Santiago Bonet (@sbonet) alerted me to recent statements by Neelie Kroes, the EU Commissioner for the Digital Agenda, about the new European Strategy for Cloud Computing. On September 25th, Ms. Kroes said: “Today we launch a significant package of measures to build that trust and boost our economic future. Today we make Europe not just cloud-friendly: but cloud-active. And we offer our economy a 160 billion-euro boost”.

    I've already spoken about this subject in previous posts (for example the post Europe behind the US on Cloud, or the recent one written in Spanish: ¿Tienen futuro los servicios Cloud europeos?, ¿y los españoles?), and also in the prologue I wrote for the Spanish book “Análisis de la oferta y la demanda de los servicios Cloud Computing” (ISBN/EAN13: 1478313854 / 9781478313854).

    I do not believe at all that Ms. Kroes reads my blog, but fortunately she and her team have taken advice from what other important and clever stakeholders think, and also from the analyses the EU Commission commissioned from IT consulting companies (see the IDC document I discuss further on).

    After reading the news, I went looking for the measures and actions that Ms. Kroes announced to foster this €160 bn market, and I found only two documents: the “COMMUNICATION FROM THE COMMISSION TO THE EUROPEAN PARLIAMENT, THE COUNCIL, THE EUROPEAN ECONOMIC AND SOCIAL COMMITTEE AND THE COMMITTEE OF THE REGIONS, on Unleashing the Potential of Cloud Computing in Europe” and an IDC study:
    The IDC document is the result of a study carried out by IDC EMEA between October 2011 and June 2012 on behalf of DG Connect of the European Commission. It analyses the business barriers to adopting the Cloud in Europe: these barriers have not stopped public cloud adoption so far, but they have limited the number of cloud solutions adopted. (Note: for the moment I will leave the analysis of those barriers out of this post; by the way, some of them were touched on in previous posts, and I will come back to them in the future.) IDC also studied two opposite scenarios for dealing with those barriers: the “No Intervention” scenario and the “Policy-driven” scenario (where cloud barriers are removed with a set of coordinated actions). According to this study, and copying from it, policy actions aimed at removing barriers to cloud can have a relevant impact on its adoption, increasing the value of spending on public clouds from €35 billion (No Intervention scenario) to almost €80 billion (Policy-driven scenario) by 2020. Moreover, still copying from the report, the diffusion of cloud computing is expected to generate substantial direct and indirect impacts on economic and employment growth in the EU, thanks to the migration to a new IT paradigm enabling greater innovation and productivity. According to the model developed by IDC, the “No Intervention” scenario of cloud adoption could generate up to €88 billion of contribution to the EU GDP in 2020. The “Policy-driven” scenario, instead, could generate up to €250 billion of GDP in 2020, corresponding to an increase of €162 billion over the first scenario. Cumulative impacts would of course be even stronger. IDC estimates a cumulative impact for the period 2015-2020 of some €940 billion in the “Policy-driven” scenario, compared to €357 billion in the “No Intervention” one.

    The IDC recommendations for the most relevant policy actions (which should be included in the European Cloud Computing Strategy to create a “cloud friendly and proactive environment” in the EU and maximize the chances of achieving the benefits identified in the “Policy-driven” scenario) are:
    • Removing Regulatory Barriers
    • Building Trust in the Market
    • Protect Consumers’ Rights to Control Their Data and to Be Forgotten
    • Promoting Standardisation and Interoperability
    • Building the Business Case for Cloud Adoption
    • Contributing to the Business Case for High-speed Broadband Infrastructures
    Coming back to the first document, the Commission argues that already ongoing policy initiatives, such as the data protection reform and the Common European Sales Law, which will lower barriers to the uptake of cloud computing in the EU, should be adopted quickly.

    Besides, and specifically for the Cloud, there are also concerns that the economic impact of cloud computing will not reach its full potential unless the technology is adopted by both public authorities and small and medium-sized enterprises (SMEs). In both cases adoption so far is marginal, due to the difficulty of assessing the risks of cloud adoption. To deliver on these goals, therefore, the European Commission will launch three cloud-specific actions:
    • Key Action 1: Cutting through the Jungle of Standards
    • Key Action 2: Safe and Fair Contract Terms and Conditions
    • Key Action 3: Establishing a European Cloud Partnership (ECP) to drive innovation and growth from the public sector.
    The Commission will also implement a series of flanking actions to support the three key actions: International Dialogue and Stimulation Measures (but when I read the latter, I miss real incentive actions or anything that really motivates or fosters cloud business).
     
    Copying the Commission's words again: “The next two years, during which the actions outlined above will be developed and put into place, will lay the foundation for Europe to become a world cloud computing powerhouse. The right progress during this preparation phase will provide a stable basis for a rapid take-off phase from 2014-2020, during which use of publicly available cloud computing offerings could achieve a 38% compound annual growth rate (around double the rate that would be achieved if the decisive policy steps are not implemented).”
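    For a feel of what that difference in growth rate means over the 2014-2020 take-off phase, here is a small Python sketch; the 38% CAGR is the figure quoted above, while the lower rate is my own assumption of "around half", for illustration only:

        # Compound-growth comparison for the two cases mentioned in the Communication.
        # 38% CAGR comes from the quoted text; 19% is an assumed "around half" rate.

        def growth_factor(cagr: float, years: int) -> float:
            """Overall market multiplication after `years` years at a given CAGR."""
            return (1.0 + cagr) ** years

        years = 2020 - 2014
        with_policy = growth_factor(0.38, years)
        without_policy = growth_factor(0.19, years)

        print(f"Policy steps taken (38% CAGR):   x{with_policy:.1f} over {years} years")
        print(f"No decisive steps (assumed 19%): x{without_policy:.1f} over {years} years")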

    Well, we have a plan and actions to be taken in a short time, but are they enough? Is the timing as fast as needed? In my opinion, no, they aren't, because I'm afraid we need more, and more urgent, stimulation actions.

    As Ms. Kroes says, “Cloud computing is an opportunity our economy cannot miss. Let's seize it, with an approach that is ambitious, effective, and European”. So what do I suggest? Yes, I know, I am going to repeat myself and say nothing new, but I think again of actions and policies like the U.S. Cloud First Policy and its accompanying measures (the Federal Risk and Authorization Management Program, FedRAMP, and so on).

    Finally, I sincerely hope I am wrong about this subject.

    Thursday, September 27, 2012

    What is new in the “Folsom” version of OpenStack?

    The “Folsom” version of OpenStack is going to be delivered today. It's the community's sixth release. OpenStack was initially launched in July 2010 by Rackspace Hosting and NASA and to date has attracted more than 150 backers, including Red Hat, SUSE, IBM, Dell, HP and even VMware.

    The previous version of OpenStack (“Essex”) was released in April 2012 with the “Nova” Compute, “Horizon” Dashboard, “Glance” imaging, “Swift” Object Store and “Keystone” Identity services.

    Maybe the most anticipated new features of this OpenStack release are the new network service known as “Quantum”, which handles network virtualization, and “Cinder”, formerly known as nova-volume and now developed independently of Nova (Compute) itself, which becomes a volume service and provides persistent block storage (volumes) to guest VMs, opening up support for traditional SAN and NAS besides the usual players (EMC, NetApp, and Nexenta).

    So the picture is the following:
    High level OpenStack Architecture in Folsom version 
    In other words, there are currently seven core components of OpenStack: Compute, Network, Object Storage, Block Storage, Dashboard, Identity, and Image Service, and here is a reminder of them:
    • Compute (codenamed “Nova”) provides virtual servers on demand. Big companies such as Rackspace and HP provide commercial compute services built on Nova, and so do others such as Tissat. Besides, it is used internally at companies like Mercado Libre and at NASA (where it originated).
    • Network (codenamed “Quantum”) provides “network connectivity as a service” between interface devices managed by other OpenStack services (most likely Nova). The service works by allowing users to create their own networks and then attach interfaces to them. Quantum has a pluggable architecture to support many popular networking vendors and technologies. Quantum is new in the Folsom release.
    • Object Store (codenamed “Swift”) allows you to store or retrieve files (but not mount directories like a fileserver). Several companies provide commercial storage services based on Swift. These include KT, Rackspace (from which Swift originated), Internap and Tissat. Swift is also used internally at many large companies to store their data.
    • Block Storage (codenamed “Cinder”) provides persistent block storage to guest VMs. This project was born from code originally in Nova (the nova-volume service mentioned above). Please note that this is block storage (or volumes), not filesystems like NFS or CIFS shares. Cinder is new in the Folsom release.
    • Dashboard (codenamed “Horizon”) provides a modular web-based user interface for all the OpenStack services. With this web GUI, you can perform most operations on your cloud like launching an instance, assigning IP addresses and setting access controls.
    • Image (codenamed “Glance”) provides a catalog and repository for virtual disk images. These disk images are most commonly used in OpenStack Compute. While this service is technically optional, any cloud of size will require it.
    • Identity (codenamed “Keystone”) provides authentication and authorization for all the OpenStack services. It also provides a service catalog of services within a particular OpenStack cloud.
    Coming back to the really new component (since Cinder was previously part of Nova, though this doesn't mean it isn't a great improvement), Quantum provides support for SDN, Software-Defined Networking: it has a feature-rich and extensible API for programmatically defining networks. This allows far richer network topologies to be defined, and more advanced configurations at the backend, such as implementing QoS and security functions. Quantum gives OpenStack users the ability to control all aspects of their cloud compute environment without compromising the underlying infrastructure and the security of the underlying OpenStack environment; in summary, Quantum brings true multi-tenancy without the restrictions of VLANs.
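    Because Quantum exposes that functionality as a REST API, a tenant can define networks with a few HTTP calls. Below is a minimal Python sketch assuming a Folsom-era deployment with Keystone v2.0 listening on port 5000 and Quantum on port 9696; the host name, tenant and credentials are placeholders, not real endpoints:

        # Sketch: create a tenant network and subnet through the Quantum v2.0 API.
        # Host, tenant and credentials are placeholders for a hypothetical deployment.
        import json
        import requests

        KEYSTONE = "http://controller.example.com:5000/v2.0"
        QUANTUM = "http://controller.example.com:9696/v2.0"

        # 1) Authenticate against Keystone v2.0 and obtain a scoped token.
        auth_body = {"auth": {"tenantName": "demo",
                              "passwordCredentials": {"username": "demo",
                                                      "password": "secret"}}}
        resp = requests.post(f"{KEYSTONE}/tokens", json=auth_body)
        resp.raise_for_status()
        token = resp.json()["access"]["token"]["id"]
        headers = {"X-Auth-Token": token, "Content-Type": "application/json"}

        # 2) Programmatically define a network and attach an IPv4 subnet to it.
        net = requests.post(f"{QUANTUM}/networks", headers=headers,
                            data=json.dumps({"network": {"name": "demo-net",
                                                         "admin_state_up": True}}))
        net.raise_for_status()
        net_id = net.json()["network"]["id"]

        sub = requests.post(f"{QUANTUM}/subnets", headers=headers,
                            data=json.dumps({"subnet": {"network_id": net_id,
                                                        "ip_version": 4,
                                                        "cidr": "10.0.0.0/24"}}))
        sub.raise_for_status()
        print("Created network", net_id, "with subnet", sub.json()["subnet"]["id"])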

    Of course, other important new features have been added to the already existing components (for example, the return of support for Microsoft's hypervisor, Hyper-V, which had been removed in the Essex version).

    Thursday, September 20, 2012

    Energy Risk Management in a Data Center

     

    Cloud Computing solutions (IaaS, PaaS and SaaS) improve the efficiency and elasticity of ICT operations by allowing better use of ICT platforms, with the consequent energy savings, while making it possible to offer services with tighter costs and a pay-per-use billing model.
    However, Cloud infrastructures make extensive and intensive use of virtualization, and DCs tend to be very dynamic environments in which virtualized workloads migrate freely between physical servers, while new workloads appear and others that had been running until then disappear.
    If not managed properly, this desired ICT agility can, on the contrary, bring undesired effects that even contradict its theoretical advantages: sub-optimal energy use (or outright energy waste), loss of service, etc. To mention a couple of extreme examples:
    • we can end up with physical servers each serving a single virtual machine,
    • or, at the other extreme, with overloaded, poorly performing machines whose excess consumption, together with that of others in the same rack, can exceed the rack's power, electrical and/or heat-dissipation capacity and lead to service outages due to electrical or thermal problems.
    Secondly, DCs are usually designed with safety margins and growth headroom that result in unused and wasted energy. It is therefore important to have tools or methods that make it possible to identify and reduce it: eliminating the “stranded energy” in the DC (caused by calculation errors or by drift over time), reducing the “idle energy” (the energy available to absorb the fluctuations of the DC's dynamic loads), better adjusting the “reserve energy”, tightening the “safety margin”, etc.
    But, logically, reducing the “reserve” and “safety” energy increases the risk of under-provisioning under certain unforeseen circumstances.
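    As a toy illustration of that trade-off, the following Python sketch flags racks whose measured draw is already eating into the reserve or safety margins; all capacities, margins and readings are invented example figures:

        # Toy illustration of the reserve/safety-margin trade-off discussed above.
        # All capacities, margins and readings are invented example figures.

        RACK_CAPACITY_KW = 8.0   # nameplate power budget of the rack
        SAFETY_MARGIN = 0.10     # fraction kept untouched for unforeseen events
        RESERVE_MARGIN = 0.15    # fraction kept to absorb dynamic load swings

        def rack_status(measured_kw: float) -> str:
            usable = RACK_CAPACITY_KW * (1 - SAFETY_MARGIN)
            comfortable = usable * (1 - RESERVE_MARGIN)
            if measured_kw > usable:
                return "CRITICAL: eating into the safety margin"
            if measured_kw > comfortable:
                return "WARNING: reserve energy being consumed"
            return "OK"

        racks = {"rack-01": 4.9, "rack-02": 6.5, "rack-03": 7.6}
        for name, kw in racks.items():
            print(f"{name}: {kw:.1f} kW -> {rack_status(kw)}")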
    Thirdly, to improve the Energy Efficiency of a Data Center (that is, to increase the DCiE or reduce the PUE), one must:
    • reduce the energy consumption, from any source (electricity, gas, diesel, etc.), of all the facilities or equipment that serve the DataCenter but do not fall into the ICT category (air-conditioning units, fans, etc.),
    • reduce the inefficiencies and losses that occur in them (UPSs, backup generator sets, etc.),
    • and reduce the losses in power distribution (PDUs, electrical panels, cabling, etc.).
    But some of the measures for reducing unused energy in ICT involve risks that must be managed properly:
    • they can endanger the continuity of the DataCenter's service (e.g. lowering the total UPS capacity, or reducing the number of standby A/C units, etc.),
    • or they can shorten the average life of the equipment (e.g. raising the average DataCenter temperature to reduce cooling),
    • or they can endanger the proper management of the DataCenter by removing elements such as PDUs, etc.,
    • and so on.
     
    Solving these three problems (just a sample of the overall set of issues) can be achieved through the correct implementation of a complete “capacity management process” (as defined by the ISO 20000 standard or by ITIL best practices) that makes it possible to optimize the use of the energy resources available in a DC without endangering either service availability or the committed service levels (SLAs). In other words, managing energy capacity while keeping the resulting risk under control.
     
    To that end we propose the implementation of a DCIM (Data Centre Infrastructure Management) system through the development of processes (automatic where possible, manual where there is no other option) that integrate the Facilities (non-ICT equipment) with the ICT equipment and orchestrate the operation of both, so that the Facilities are adjusted at all times to the ICT demand (with neither unnecessary excess that wastes energy nor a shortfall that jeopardizes ICT availability) and with the dynamism that a modern DC requires.
    DCIM as an Operational Intelligence Aggregator
     
    On the ICT side these procedures will follow the process model of the ISO 20000 standard, and on the Facilities side PAS 55, incorporating processes that harmonize the operation of the Facilities with the operation of the ICT equipment.
    DCIM support processes
     
    Note: This is a summary of Tissat's talk at the CSTIC 2012 Congress which, under the motto “Dominando los riesgos se compite mejor” (“Mastering risks makes you more competitive”), was organized by the AEC (Asociación Española para la Calidad) and held in Madrid on September 18, 2012.

    Wednesday, September 5, 2012

    Do European Cloud services have a future? And the Spanish ones?

     

    Preliminary note: This article is a copy of the preface I wrote for the report-book “Análisis de la oferta y la demanda de los servicios Cloud Computing”, published by the AIMME Technological Institute with ISBN/EAN13: 1478313854 / 9781478313854.
     
    “An invasion of armies can be resisted, but not an idea whose time has come”
    or
    Do European Cloud services have a future? And the Spanish ones?
     
    The emergence of “Cloud Computing” implies redefining the way of working (in the broadest sense of the word) with ICT, both for Service Providers and for their Users (whether citizens, self-employed workers or professionals, the employees of a company, body or entity of any kind, or the ICT departments of those companies or entities).
     
    In the beginning there were many people, and even today there are still some (especially among ICT-sector companies that have not “done their homework” and are not yet in the market), who said that “Cloud Computing” was just a buzzword; and there is no doubt that this has happened many times with concepts behind which there was nothing (new), and that every time a new concept appears there are people who exploit it and stretch it to very forced limits (such as XaaS or “Whatever as a Service”).
     
    However, Cloud Computing, by taking advantage of and combining the advances of many other technologies (in many cases underlying the model, such as virtualization), responds for the first time to the expectations of citizens and companies of using ICT as a “utility” (that is, like electricity, gas or water companies, etc.). As Victor Hugo said, “an invasion of armies can be resisted, but not an idea whose time has come.”
     
    That is why all the specialists agree that, although the concept of Cloud Computing will still undergo transformations and evolve, it is nevertheless “not a passing fad, but here to stay”. That evolution will come both from the industry's own innovation, creating new applications and services for the end user, and from the challenges, or obstacles and barriers depending on how you look at them, that Cloud Computing still has to face and solve: from security to cultural, organizational or process change, through the absence of standards and the problems of interoperability, portability, “trust”, performance, SLAs, regulation and legislation, etc.
     
    The solution to these challenges does not lie in one product, technique, method or specific process, but in a “standardization” of all the activities needed to guarantee security and to solve the remaining pending challenges. A recent survey by ENISA (the EU's cyber-security agency) on Service Level Agreements (SLAs) showed that many officials in public-sector organizations receive hardly any feedback on important security factors, such as service availability or software weaknesses. In order to help solve this problem, ENISA launched this very year 2012 (in April) a practical guide aimed at the IT teams that procure ICT services, focusing on continuous security monitoring throughout the life cycle of a “Cloud” contract: “Procure Secure: A guide to monitoring of security service levels in cloud contracts”. The publication of this guide came a few months after the US Administration published, in February, the “Federal Risk and Authorization Management Program” (FedRAMP), whose goal is to assess and assure risk by standardizing more than 150 “security controls”, which establish the common security requirements for implementing Clouds in certain types of systems. Thus, providers that want to sell their services to the US Federal Administration must join the program and demonstrate that they comply with those controls.
     
    Besides the many points in common, the first difference between the two is that while the European one is a “guide”, the American one also includes a certification program for the companies that want to contract with its Federal Administration, a regulation that has not been seen (by providers) as an obstacle, but as an incentive for business. However, in my opinion, the main difference is that the American FedRAMP is the consequence and the logical step after a differentiating event (a breaking point), namely the publication at the end of 2010 of the “Cloud First Policy”, with which the Obama Administration (through the Office of Management and Budget, OMB) decided to push the use of Cloud Computing across all federal bodies in order to reduce the cost of services, requiring US Federal Agencies to use Cloud solutions whenever they exist and are secure, reliable and cheaper.
     
    In a very recent magazine article (June 2012), the European Commissioner for the Digital Agenda (Ms. Neelie Kroes) stated that Europe is not advocating a European Cloud, but rather what Europe can contribute to the Cloud, and she clarified that it is a concept without borders, so legislation must address these aspects without abandoning the personal data protection rights of European citizens. These statements follow the also recent publication of a report by Gartner (one of the most prestigious consulting firms in the sector) claiming that Europe is 2 years behind the US in Cloud matters. While acknowledging that interest in the Cloud in Europe is very high, and that the opportunities Cloud Computing offers are valid worldwide, according to Gartner the risks and costs of the Cloud, mainly security, transparency and integration (which apply everywhere), take on a special idiosyncrasy and relevance in Europe that act as brakes (or at least “decelerators”) on Cloud adoption in Europe:
    • Firstly, the diverse (and still changing) privacy regulations of the European countries inhibit the movement of personal data in the Cloud. This aspect which, according to some, may favour the dominance of companies that base their business on geolocating the Cloud within the borders of a country (or area), is nevertheless having the effect that many other companies avoid European Cloud Service Providers (CSPs) for fear of conflicts between European and American legislation.
    • Secondly, the complexity of business process (B2B) integration in Europe, although it has favoured some European Providers, once again makes it difficult to reach critical mass and therefore slows down the emergence of companies offering Cloud Services across the whole of Europe.
    • Thirdly, the slowness of pan-European political practices and legislative processes, as well as the legislative diversity among the different countries, hinder the business of CSPs.
    • Finally, on top of these 3 factors, there is the effect that the eurozone debt crisis has on investment.
    In the face of these challenges, in my opinion, for now we only have good intentions from the European Commission that materialize as big words and few deeds (only some good but timid steps which, in my view, are insufficient).
     
    Moreover, in Europe the influence of the public sector is much greater than in the US (where the private sector is, in itself, much more dynamic and agile). That is why both the European Commission's Administration and the Administrations of the Member States have an important, indeed primary, role in fostering Cloud Computing in Europe, both as users and consumers of Cloud services and by facilitating the development of business around Cloud Computing. And that is why I believe Europe needs to define a “Cloud Policy” that clearly promotes the use of the Cloud in all European Public Administrations, both in the Commission and in the Member States (along the lines of the US “Cloud First Policy”), so as to foster the Cloud market both for service providers (CSPs) and for the companies consuming those services, as well as investment in Research and Development in this area. The European Commissioner, Ms. Kroes, stated that Europe is full of talented people to achieve it, but it certainly needs a market that demands it and regulation/legislation that allows it; otherwise we will once again “miss the train”.
     
    En lo que a nuestro país concierne, el “Informe de Recomendaciones para la Agenda Digital en España” (presentado hace escasos días, el 18 de junio, y elaborado por un Grupo de Expertos a quien el Gobierno encomendó su elaboración) reconoce que “España se enfrenta a una crisis de su economía marcada por la particular configuración de los riesgos endémicos –burbuja inmobiliaria, crisis financiera, etc.-, la existencia de deficiencias estructurales y los desequilibrios respecto a otras economías centrales de la zona euro, elementos que están amplificando los efectos negativos de la adversa coyuntura internacional”. También afirma que “la adopción inteligente de tecnologías digitales permitirá impulsar el crecimiento, la innovación y la productividad, contribuyendo a evitar que se trunque la trayectoria de transformación y modernización que ha experimentado la economía española en las últimas décadas.” Y entre los principales factores de cambio destaca en primer lugar “la transición al cloud computing como mecanismo de entrega eficiente de servicios”, sin olvidar otros tan importantes como “la generalización de la movilidad, el aumento en la disponibilidad de banda ancha ultrarrápida, el desarrollo de la Internet de las Cosas, y el amplio uso de dispositivos que, como smartphones y tablets.
     
However, while in the United Kingdom the Government created the “UK CloudStore” a few months ago (a system designed to make the process of selecting and procuring Cloud services for the public sector easier, simpler and cheaper), in Spain we have gone more than 10 years without renewing the “Catálogo de Patrimonio” (the State procurement catalogue) for DataCenter services, in which we hope Cloud Computing services will be included when the corresponding tender is finally published (at least those of the IaaS type, i.e. “Infrastructure as a Service”); and let us also hope that its resolution does not drag on for more than two years (owing to the appeals lodged), as happened with the tender for “Desarrollo de Sistemas de Información” (Information Systems Development).
     
Finally, my greatest wish is that my words prove as ephemeral as, or even more so than, the statistical data collected and analysed in this report. Those data are necessary, since they reflect the situation and the awareness of Cloud Computing among Spanish SMEs and point out which aspects to work on in order to improve that situation, but we all hope they become outdated as soon as possible, because that would be a great sign of progress for the Spanish ICT market in particular and for the evolution of the Spanish economy in general.
     
Final Note: To reflect the situation in Spain more accurately, I should clarify that, after the publication of the aforementioned report/book from which I copied this article, the tender to enter the Catálogo de Patrimonio del Estado for “Servicios de alojamiento de sistemas de información” (hosting services for information systems) has been launched (and is still in the bidding process). With regard to Cloud services it establishes, and I quote: “Bidders may indicate in their offer whether they are in a position to provide cloud-computing-based hosting services should they be awarded the contract. Those conditions would be carried over to the contracts based on the framework agreement that contemplate this possibility. Including this type of solution is not mandatory in order to bid.”

    lunes, 16 de julio de 2012

    Personal Health Data Management in the FASyS project

Reviewing the series of posts I have dedicated to the FASyS project, I just realised that I forgot an important FASyS aspect (in which Tissat is also involved): personal health data management.
As stated in a previous post, the EHR (Electronic Healthcare Record) stores the patient's medical data from the point of view of the care process, and it is owned by the healthcare system. To improve the characterization of the person and his environment, the EHR data are extended with additional information, which is stored, together with the EHR information, in other repositories. These repositories are known as PHRs (Personal Health Records) and can collect data such as habits, preferences, family information, moods, customs or nutritional profile. A PHR adapted to the FASyS requirements is being developed.
This kind of repository is owned by the person, who may share it with whomever he chooses: when an employee starts working at a factory for the first time, health professionals ask him to download his previous EHR, so that his personal file (PHR) is more complete for the final diagnosis. The PHR developed in FASyS is based on the following aspects:
    • It is focused on workplace health.
• It allows the patient to enter data (automatically or manually).
• It allows an exchange of information with the healthcare system (EHR).
• It includes an option to generate summaries to share information with other PHRs. One of the advantages appears, for example, when a worker moves to another factory: if the new factory has the FASyS system, his PHR can be downloaded into the new factory's system so as to have a more complete file.
• Stored data can also be extracted for consultation whenever health professionals need to do so. Consequently, there must be an access control, which ensures that these personal data can only be seen by authorized people. If the data have to be used for statistical studies, they must be anonymized, so that the results of the studies cannot be linked to particular individuals (a minimal sketch of this idea follows the list).
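To make the anonymization idea above a little more tangible, here is a minimal sketch; the field names (worker_id, blood_pressure, and so on) are invented for illustration and are not the actual FASyS data model.

import hashlib

# Hypothetical PHR records; the real FASyS schema is not shown here.
phr_records = [
    {"worker_id": "W-1024", "age": 41, "blood_pressure": 128, "workplace": "assembly"},
    {"worker_id": "W-2048", "age": 35, "blood_pressure": 117, "workplace": "machining"},
]

def anonymize(record, salt="study-2012"):
    """Strip the direct identifier and replace it with a salted one-way hash,
    so statistical studies cannot be traced back to a particular worker."""
    pseudo_id = hashlib.sha256((salt + record["worker_id"]).encode()).hexdigest()[:12]
    anonymous = {k: v for k, v in record.items() if k != "worker_id"}
    anonymous["pseudo_id"] = pseudo_id
    return anonymous

study_dataset = [anonymize(r) for r in phr_records]
print(study_dataset)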

Therefore, the PHR (Personal Health Record) of the patient is mainly composed of:
• The health data gathered by the medical body sensors that the worker wears: part of the FASyS system.
• The environmental data gathered by the FASyS on-premises sensors: part of the FASyS system.
• The EHR (Electronic Health Record), i.e. the official health record of the worker (as a patient) with data such as diseases, surgical interventions or pains diagnosed by a private or public health institution (hospital, physician, and so forth): external information, but integrated into the FASyS system.
• The history of data and events related to the worker/patient collected by the FASyS system.

    martes, 3 de julio de 2012

    “Workflows and Choreographer” in the FASYS project.

As explained in previous posts, a “Health Continuous Vigilance System” is being developed inside the FASyS project. It provides a workable solution to improve the current healthcare system in factories devoted to machining, handling and assembly operations. With such a system it is possible to monitor the worker more continuously, improving his health and making the factory safer and healthier, by increasing the number of variables obtained from the worker's environment and from personal parameters, and by combining them with medical knowledge and actuation protocols.

One of the main modules of this Health Continuous Vigilance System is the Response Medical Center (RMC), whose main tool is the NOMHAD application, whose first version is all but finished (though it does not yet include environmental data). Once the information has been monitored and classified, the next point is the intervention. In order to represent the prevention protocols for this intervention, workflows are developed. Given each worker's singularity, the prevention protocols need to be adapted to each of them; in this way, the elimination of occupational hazards is much more effective.

In order to illustrate graphically the actions to perform and the rules describing the flow followed by those actions, it is possible to use workflows. They are a formalization of the process to be automated. Some workflow languages can be executed automatically; this is known as workflow interpretation. The automatic interpretation of a workflow is done by a “workflow engine”, which carries out the actions described in the workflow, in the order and with the derivation rules specified in it. Workflows can be employed by people who are not programming experts in the health area. For that reason, and thanks to these modules, health professionals are able to design and modify the protocols that are then executed automatically. That is why workflows are so important in the FASyS project. In order to support this key piece, many existing workflow languages and workflow engines were studied and tested, but we finally decided to build our own: TPA and the TPA engine.
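Since TPA and the TPA engine are internal developments and are not documented here, the following is only a rough sketch of what any workflow engine does: execute the actions of a workflow in order, following the derivation (branching) rules declared in it. The step names, the fake sensor and the thresholds are all invented for illustration.

# Minimal workflow engine sketch: each step names an action and a rule that
# selects the next step from the data produced so far (None ends the flow).
workflow = {
    "measure": {"action": lambda ctx: ctx.update(hr=ctx["sensor"]()),
                "next":   lambda ctx: "alert" if ctx["hr"] > 120 else "log"},
    "alert":   {"action": lambda ctx: print("Notify health professional"),
                "next":   lambda ctx: "log"},
    "log":     {"action": lambda ctx: print("Store reading", ctx["hr"]),
                "next":   lambda ctx: None},
}

def run(workflow, start, ctx):
    step = start
    while step is not None:
        node = workflow[step]
        node["action"](ctx)          # perform the step's action
        step = node["next"](ctx)     # apply the derivation rule

# Example run with a fake heart-rate sensor that returns 130 bpm.
run(workflow, "measure", {"sensor": lambda: 130})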

Coming back to the process of gathering information about the worker (either personal parameters or environmental data), it should be noted that sensors and services need to be interconnected in a fault-tolerant and decentralized way. This complex and highly interconnected process can be addressed using a “Choreography of Services”. This means that the “choreographed” processes are independent and communicate with each other to define execution flows. This model makes it easier to connect and disconnect services dynamically, and at the same time it can handle different kinds of sensors and configurations. This approach is shown in the next figure:

    FASyS Architecture

The use of “choreography” to interconnect services also requires a common exchange language so that the services can understand each other. This is achieved by an architecture that includes a semantic layer in the Choreographer, whose purpose is to improve the intercommunication among the sensors, actuators and services of the system. From the programmer's point of view, “choreography” is similar to SOA but with some important differences: services are not discovered but known from a defined list, the code is quite lightweight (designed for devices with very limited computing capabilities), and so on.
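As a very rough picture of those differences, the sketch below hard-codes the list of known services (no discovery) and lets them react independently to lightweight messages published on a shared topic; the service names and topics are, of course, hypothetical.

from collections import defaultdict

# Fixed, known set of subscribers per topic (no service discovery).
subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, message):
    # Every choreographed service interested in the topic reacts on its own;
    # there is no central coordinator deciding the order of execution.
    for handler in subscribers[topic]:
        handler(message)

# Hypothetical services: a data store and a climate actuator.
subscribe("temperature", lambda m: print("RMC stores reading:", m))
subscribe("temperature", lambda m: m > 35 and print("Climate actuator triggered"))

publish("temperature", 37)   # both services react independently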

Ontologies are a solution for describing concepts formally. More precisely, an ontology is a formal and explicit specification of a shared conceptualization. It provides a common vocabulary that can be used to model the kinds of objects and/or concepts and their properties and relations. “Ontology reasoners” are software applications that allow semantic search over the ontological description. Using this technology, it is possible to describe the sensors and data services semantically, giving the system a more complete understanding of the collected data and of the services' actions.

The use of ontology and reasoning services to describe the data coming from the sensors makes it possible to obtain a more precise interpretation and to detect automatically which sensors and services are available at any time.
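As an example of what describing a sensor semantically can look like, the sketch below uses the open-source rdflib library; the fasys namespace and the class and property names are invented for illustration and are not the project's actual ontology.

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

FASYS = Namespace("http://example.org/fasys#")   # hypothetical namespace

g = Graph()
g.add((FASYS.sensor42, RDF.type, FASYS.TemperatureSensor))
g.add((FASYS.sensor42, FASYS.locatedAt, FASYS.millingStation3))
g.add((FASYS.sensor42, FASYS.lastReading, Literal(36.8, datatype=XSD.float)))

# A reasoner (or here, a plain SPARQL query) can then find every temperature
# sensor currently described and where it is installed.
query = """
    SELECT ?sensor ?place WHERE {
        ?sensor a <http://example.org/fasys#TemperatureSensor> ;
                <http://example.org/fasys#locatedAt> ?place .
    }
"""
for sensor, place in g.query(query):
    print(sensor, "is at", place)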

Finally, referring to the figure, a “Services Orchestrator” is also included in the architecture. The Orchestrator is an automaton engine connected to the Choreographer that supports the use of workflows relating the different services linked by the Choreographer.

Although it is not shown in the figure above, a “Process Mining” module has also been added. Its main objective is to build, from all the event data gathered about a worker, the real process followed during an activity (this process may be either the result of a workflow assigned by a medical professional or the procedure followed when working with a dangerous machine). In this way the current process can be compared against, depending on the case, the one established by the medical staff or the one the worker usually follows. Therefore, we can detect either a deviation from the medical workflow, or an anomalous behaviour in the worker's habits that could be the visible effect of an internal (physical or psychological) problem.
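A very simplified way to picture that comparison is to check the observed event trace against the prescribed sequence of steps; the step names below are invented, and real process-mining techniques (e.g. alignment-based conformance checking) are of course far richer than this naive check.

def deviations(prescribed, observed):
    """Return the prescribed steps that were skipped and the extra events that
    appeared, preserving order; a crude stand-in for conformance checking."""
    remaining = list(prescribed)
    skipped, extras = [], []
    for event in observed:
        if event in remaining:
            idx = remaining.index(event)
            skipped.extend(remaining[:idx])   # prescribed steps jumped over
            del remaining[:idx + 1]
        else:
            extras.append(event)
    skipped.extend(remaining)                 # steps never performed at all
    return {"skipped": skipped, "unexpected": extras}

prescribed = ["put_on_gloves", "check_guard", "start_machine", "log_job"]
observed   = ["check_guard", "start_machine", "smoke_break", "log_job"]

print(deviations(prescribed, observed))
# {'skipped': ['put_on_gloves'], 'unexpected': ['smoke_break']}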

    miércoles, 27 de junio de 2012

    FASYS project once again: the Health Continuous Vigilance System

After introducing the FASyS project and briefly presenting the NOMHAD application, today we are going to address the “Health Continuous Vigilance System” itself:

The next figure presents the architecture of the “Health Continuous Vigilance System” developed inside the FASyS project. This system is based on five main parts: the Monitoring Module, the Response Medical Center, the Differential Diagnosis Module, the Prevention Plans Module and the Intervention Module. Each of these parts can be influenced by a number of external variables and parameters, such as the Electronic Healthcare Record (EHR).
    Health Continuous Vigilance System

Given the large amount of information generated in this model, it is necessary to process all the collected data, since such a volume of information would not be easily understandable by health professionals. The services and intelligent devices that have been developed provide a classification of the monitored data: some data will fall inside a normal range and others will be outside the settled limits, generating alarms. Furthermore, this classification will help the doctor organize and evaluate all the workers' data and, at the same time, act more precisely on a particular diagnosis.

Therefore, the Monitoring Module is where all the personal information gathered by the system about the worker himself is stored. All these personal data, obtained from the monitoring, are complemented and enriched with variables from several sensors in the factory (collecting temperature or humidity, that is, particular characteristics of the workplaces where the worker may stay during a working day), and all of them are brought together in the Response Medical Center (RMC). The RMC makes it possible to filter and organize the population according to changeable rules and to the user's role. So it is in this module where the first body of data is collected, creating, as a result, personalized records of the workers and establishing alerts that make the task of health professionals easier.
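A toy sketch of that rule-and-role based filtering could look as follows; the roles, the rules and the record fields are assumptions made for illustration, not the RMC's actual configuration.

# Hypothetical, changeable filtering rules keyed by user role.
rules = {
    "occupational_doctor": lambda w: w["heart_rate"] > 110 or w["exposure_db"] > 85,
    "safety_engineer":     lambda w: w["exposure_db"] > 85,
}

workers = [
    {"id": "W-1", "heart_rate": 118, "exposure_db": 80},
    {"id": "W-2", "heart_rate": 72,  "exposure_db": 92},
]

def population_for(role):
    """Return only the workers the given role should be alerted about."""
    rule = rules[role]
    return [w["id"] for w in workers if rule(w)]

print(population_for("occupational_doctor"))   # ['W-1', 'W-2']
print(population_for("safety_engineer"))       # ['W-2']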

    The information stored in the previous module is not enough to make a complete diagnosis. So, data from other sources are needed, such as:
• Data from a medical knowledge base (containing relations among diseases, risks, medical tests, medical recommendations, etc.).
• Personal data from the health system, which include the previous medical history and are known as the Electronic Healthcare Record (EHR).
• Trend analyzer data. This system is in charge of detecting how some parameters of a person change over time. These trends can be added to the absolute values in order to get a more complete evaluation of the person (a minimal sketch of such a trend computation is given below).
• Evaluation module results. They can be defined as a “photograph of the person” at a particular moment, with no need to detect a problem.
These four sources are the subsystems that provide important information to the main blocks. To manage all this information, a Differential Diagnosis Module has been developed (Dr. House's influence reaches far). This module, through intelligent systems, helps health professionals in their decision making.
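To make the trend analyzer idea concrete (as announced in the list above), here is a minimal sketch that fits a least-squares slope to a series of readings; the readings and the threshold are invented for illustration.

def trend(values):
    """Least-squares slope of equally spaced readings: a positive value means
    the parameter is drifting upwards over time."""
    n = len(values)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical weekly systolic blood-pressure readings for one worker.
readings = [121, 123, 122, 126, 128, 131]
slope = trend(readings)
if slope > 1.5:          # arbitrary threshold per reading interval
    print(f"Upward trend detected ({slope:.2f} mmHg/week)")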

The next step is the Prevention Plans Module, where it is defined how to act. The plans to be taken can be of two types: on the one hand a medical diagnosis and on the other hand a technical diagnosis, for instance a redesign of the workplace. It is important to remark that these actions are not exclusive. Accordingly, different levels of action can be established, from very complex levels to simpler ones such as, for example, reminder panels. In addition, the prevention actions carried out in this module can be conducted at three levels:
• At the first level, the system reacts automatically. When one of the collected data items reaches a condition that the professional wants to have controlled, there is an automatic reaction. This reaction can be the activation of an alarm, a protocol for a risk situation, etc. These automatic reactions are achieved using ECA rules (i.e. Event-Condition-Action rules); a minimal sketch follows this list.
• At the second level, health professionals receive the alerts and then react accordingly, acting on individual workers. The professionals' reaction can be the assignment of a previously developed prevention plan, or of a prevention plan modified for the particular worker's situation. The ways in which these processes are defined are called workflows. (Note: these workflows will be treated in more detail in forthcoming posts.)
• The third level is in charge of providing knowledge to the other two levels, improving the protocols, adapting them to new situations and personalizing the recommendations. Innovative intelligent tools are used to achieve this.
Finally, the Intervention Module is responsible for performing the particular action selected for the problem in question.
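As a rough picture of the first, automatic level, the sketch below encodes a couple of ECA rules as (event, condition, action) triples; the events, thresholds and actions shown are hypothetical.

# Each ECA rule: the event it listens to, a condition over the event data,
# and the automatic action fired when the condition holds.
eca_rules = [
    ("heart_rate", lambda v: v > 140, lambda v: print("Raise cardiac alarm:", v)),
    ("gas_level",  lambda v: v > 50,  lambda v: print("Start risk protocol:", v)),
]

def on_event(name, value):
    for event, condition, action in eca_rules:
        if event == name and condition(value):
            action(value)          # first-level automatic reaction

on_event("heart_rate", 152)        # fires the cardiac alarm rule
on_event("gas_level", 12)          # below threshold: nothing happens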

As can be seen in the figure above, the “Health Continuous Vigilance System” is cyclic and works as a continuous feedback learning loop; therefore, after the Intervention Module, it starts again from the Monitoring Module.

Another important aspect to take into account is personal data privacy. As a consequence, only a few people will have access to the EHR (Electronic Healthcare Record), to the personal variables and to the personal diagnoses.

Finally, to complete a previous post, it should be emphasized that the NOMHAD application is used in the Response Medical Center module; in fact, it is the main tool of this module.

A forthcoming post will complete the vision of Tissat's collaboration in the FASyS project, focusing on the workflows used to implement prevention protocols, as well as on the use of a “choreographer” to interconnect services and on the management of the PHR (Personal Health Record).

    lunes, 25 de junio de 2012

    Lego Data Centre … and Walhalla, a European Tier IV DataCenter

Let's go back to childhood for a while and play with our Legos, this time with a fun new Lego Data Center game:

    https://www.youtube.com/v/ekDesN76pQ4?version=3&feature=player_embedded


and now, after this fun interlude, let's do business in Walhalla, a Tier IV DataCentre certified by The Uptime Institute and winner of a DataCentre Leaders Award in 2010:


    Walhalla TIER IV LOGO

Walhalla on the Tier IV map