Category Archives: Cloud Computing

A Modern Enterprise Architecture Is Essential for Scaling Agile

Why Modern Enterprise Architecture is Key to Agile Scaling 

In today’s fast-paced digital landscape, agility and scalability have become fundamental for businesses striving for competitive advantage and innovation. Agile methodologies, once the province of software development teams, are now being scaled across entire organizations to enhance flexibility, responsiveness, and customer satisfaction. 

However, scaling agile practices is not merely a matter of expanding principles from a single team to many. It necessitates a foundational shift in how a company’s infrastructure – its enterprise architecture (EA) – is designed and implemented. A modern enterprise architecture is pivotal in ensuring that the scaling of agile methodologies is successful, sustainable, and aligned with business objectives.

i. What is Modern Enterprise Architecture?

Modern enterprise architecture (EA) is a strategic approach to designing and aligning an organization’s technology landscape with its business goals. It provides a blueprint for how applications, data, and infrastructure should be structured to support agility, scalability, and innovation.

ii. How Modern EA Supports Agile Scaling

o Alignment: Modern EA ensures that agile development teams are working towards a common goal by providing a shared vision of the target architecture.

o Modularity and Flexibility:  A well-designed architecture breaks down complex systems into smaller, independent components that can be easily integrated and modified. This enables agile teams to deliver features faster and respond to changing requirements.

o Center of Excellence:  Modern EA fosters a collaborative environment where architects act as advisors and coaches, supporting agile teams throughout the development lifecycle.

iii. Agile at Scale: The Need for a Modern Enterprise Architecture

While Agile methodologies have proven effective at the team level, scaling Agile across large organizations presents unique challenges. Traditional monolithic architectures, with their rigid structures and siloed systems, are ill-suited for the rapid pace and collaborative nature of Agile development. As teams grow in size and complexity, coordination, communication, and alignment become increasingly challenging. Without a modern enterprise architecture that can support Agile principles and practices, organizations risk inefficiency, duplication of efforts, and disjointed customer experiences.

iv. The Symbiosis of Agile and Modern Enterprise Architecture

A. Flexibility and Responsiveness: A modern enterprise architecture is inherently designed to support flexibility and rapid change. It adopts modular, service-oriented designs that allow for parts of the IT system to be changed or upgraded without disrupting the whole. This modular approach is harmonious with agile’s iterative development and continuous delivery models, allowing businesses to respond swiftly to market changes or new customer demands.

B. Enhanced Collaboration and Visibility: Agile methodologies thrive on collaboration and cross-functional team dynamics. Modern EA frameworks facilitate this by promoting transparency and interconnectedness among systems, data, and processes. By fostering an environment where information flows freely and systems are integrated, organizations can break down silos and encourage more cohesive and cooperative work practices, which are essential for scaling agile.

C. Strategic Alignment: Scaling agile requires more than the adoption of flexible working practices; it demands alignment between IT initiatives and business objectives. Modern enterprise architectures support this by providing a roadmap that guides not only IT strategy but also how it aligns with broader business goals. This ensures that agile scaling efforts are driving value and are in sync with the company’s strategic vision.

D. Integrated Systems and Data: Siloed systems and segregated data repositories create barriers to Agile scaling, leading to inefficiencies and inconsistencies. A modern EA emphasizes integration and interoperability, ensuring that systems and data are seamlessly connected and accessible, thereby enhancing collaboration and decision-making speed.

E. Sustainability and Scalability: A common challenge in scaling agile is maintaining the momentum and practices as more teams and complexities are added. Modern EA helps address this by building scalability into the system’s core, ensuring that the infrastructure can handle growth without performance degradation. This includes considerations for cloud computing, data management, and application scalability, ensuring that the enterprise can grow without compromising agility.

F. Innovation Support: Finally, by providing a flexible, aligned, and scalable foundation, a modern enterprise architecture fosters an environment conducive to innovation. Agile teams can experiment, iterate, and deploy new solutions with confidence, knowing the underlying architecture supports rapid development cycles and the continuous evolution of products and services.

v. Implementing Modern Enterprise Architecture for Agile Scaling

Implementing a modern EA to support agile scaling is not without its challenges. It requires a deep understanding of both the current state of the organization’s architecture and its future needs. 

Key steps include:

o Assessment and Planning: Evaluating the existing architecture, identifying gaps, and planning for a transition to a more modular, flexible, and scalable architecture.

o Technology Standardization: Rationalizing technology stacks and investing in tools and platforms that support agile practices and integration needs.

o Cultural Shift: Beyond technology, fostering a culture that embraces change, learning, and collaboration across all levels of the organization.

o Governance and Compliance: Establishing governance models that support agility while ensuring compliance and security are not compromised.

vi. Key Elements of a Modern Enterprise Architecture

A modern enterprise architecture is designed to facilitate agility, collaboration, and innovation at scale. It provides the foundation for seamless integration, continuous delivery, and cross-functional collaboration, enabling organizations to adapt quickly to changing business needs and market demands. Several key elements are essential for building a modern enterprise architecture that supports scaled Agile:

A. Microservices Architecture: Breaking down large, monolithic systems into smaller, independently deployable services allows for greater flexibility, scalability, and agility. Microservices enable teams to work autonomously, iterate quickly, and release software updates independently, without disrupting other parts of the system.

B. Cloud Computing: Leveraging cloud infrastructure provides the scalability, elasticity, and reliability needed to support Agile development practices. Cloud platforms offer on-demand access to computing resources, enabling teams to scale their infrastructure dynamically to meet changing demands and optimize costs.

C. DevOps Practices: Embracing DevOps principles and practices streamlines the software delivery pipeline, from development to deployment and beyond. Automation, continuous integration, and continuous delivery (CI/CD) enable organizations to release software more frequently, reliably, and with reduced lead times, fostering a culture of collaboration and innovation.

D. API-First Approach: Adopting an API-first approach to software development promotes modularity, interoperability, and reusability. APIs serve as the building blocks of digital ecosystems, enabling seamless integration and interoperability between disparate systems and applications, both internally and externally.

E. Event-Driven Architecture: Embracing event-driven architecture facilitates real-time data processing, event-driven workflows, and asynchronous communication between services. Events serve as triggers for business processes, enabling organizations to respond quickly to changing conditions and deliver timely, personalized experiences to customers.
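The event-driven pattern described in the last element can be illustrated with a minimal in-process event bus, where independent consumers react to the same business event without knowing about each other. This is a hypothetical sketch (the topic name and handlers are invented for the example); a production system would use a broker such as Kafka or RabbitMQ:

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process event bus illustrating event-driven decoupling."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a handler to be called when an event on `topic` is published."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        """Deliver the event to every subscriber of the topic."""
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
audit_log = []

# Two independent consumers react to the same business event:
# neither knows the other exists, so each can evolve separately.
bus.subscribe("order.placed", lambda e: audit_log.append(f"audit:{e['id']}"))
bus.subscribe("order.placed", lambda e: audit_log.append(f"email:{e['id']}"))

bus.publish("order.placed", {"id": 42})
print(audit_log)  # ['audit:42', 'email:42']
```

Because publishers and subscribers are coupled only through the event, a new consumer (say, a fraud check) can be added without touching the order service, which is exactly the autonomy scaled Agile teams need.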

vii. Benefits of a Modern Enterprise Architecture for Scaling Agile


A modern enterprise architecture offers numerous benefits for organizations seeking to scale Agile practices effectively:

o Enhanced Flexibility: Modular, loosely coupled systems enable teams to respond quickly to changing requirements and market conditions, fostering adaptability and innovation.

o Improved Collaboration: Seamless integration, automated workflows, and cross-functional collaboration promote alignment, transparency, and knowledge sharing across the organization.

o Faster Time-to-Market: Streamlined development and delivery pipelines, coupled with scalable infrastructure, enable organizations to release software updates more frequently and reliably, accelerating time-to-market and reducing time-to-value.

o Better Customer Experiences: Agile development practices, combined with real-time data processing and event-driven workflows, enable organizations to deliver personalized, responsive experiences to customers, driving satisfaction and loyalty.

viii. Conclusion

The symbiosis between a modern enterprise architecture and Agile practices is a critical enabler for organizations aiming to scale agility and thrive in a digital-first world. 

A modern EA provides the structure, visibility, and alignment necessary to scale Agile effectively, turning it from a team-based methodology into a comprehensive enterprise-wide strategy. 

As companies increasingly recognize the value of both Agile and a modern EA, the fusion of these approaches will continue to be a hallmark of successful digital transformation initiatives. 

By investing in the development and continual evolution of a modern EA, organizations can ensure the scalability, flexibility, and responsiveness required to excel in today’s dynamic business environment.

ix. Further references 

o LeanIX – Using Enterprise Architecture To Support Scaled Agile (leanix.net)

o LinkedIn (Timo Hammerl) – Agile Architecture: A Comparison of TOGAF and SAFe Framework for Agile Enterprise …

o Scaled Agile Framework – Enterprise Architect (scaledagileframework.com)

o Advised Skills – Open Agile Architecture: A Comprehensive Guide for Enterprise … (advisedskills.com)

o Bain & Company – Digital Innovation: Getting the Architecture Foundations Right (bain.com)

o The Essential Project – Is your Enterprise Architecture delivering value? (enterprise-architecture.org)

o agiledata.org – Agile Enterprise Architecture: Collaborative …

o Architecture & Governance Magazine – SAFe and Enterprise Architecture explained in 5 points (architectureandgovernance.com)

o LinkedIn (Bizcon) – The Role of Enterprise Architecture in Business Agility and Resilience

o Speaker Deck – Modern Enterprise Architecture: Architecting for Outcomes (speakerdeck.com)

o CIOPages.com – Agile Enterprise Architecture: Ongoing and Enduring Value from AEA

o Medium (Aman Luthra) – Roles and Responsibilities: Enterprise Architect

o staragile.com – Navigating Business Agility: The Role of a Scaled Agile Architect

o Conexiam – Understanding Enterprise Architecture and Agile (conexiam.com)

o Capstera – The Ultimate Guide to Enterprise Architecture Management (capstera.com)

o Agile meets Architecture – How the Agile Mindset is Integral to Architecting Modern Systems (agile-meets-architecture.com)

o ResearchGate – Enterprise architecture: Management tool and blueprint for the organisation (PDF) (researchgate.net)

o Anders Marzi Tornblad – The role of software architects in Agile teams (atornblad.se)

The Fundamentals of ISO/IEC 27032 

The Fundamentals of ISO/IEC 27032: Cybersecurity Guidelines for Cyber Hygiene

i. What is ISO/IEC 27032?

ISO/IEC 27032 is an international standard published jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). It provides guidelines for cyber hygiene, which are the essential practices that individuals and organizations should follow to maintain good cybersecurity posture.

ii. Why is ISO/IEC 27032 important?

In today’s digital world, cyber threats are constantly evolving. Cybercriminals are becoming more sophisticated, and the potential consequences of cyberattacks are more severe than ever. Implementing and maintaining good cyber hygiene is essential for protecting your organization’s information assets, systems, and people from cyberattacks.

iii. What are the key principles of ISO/IEC 27032?

ISO/IEC 27032 is based on four key principles:

A. Due care: Organizations should take reasonable steps to protect their information assets.

B. Proportionality: The level of cybersecurity protection should be proportionate to the risks involved.

C. Accountability: Individuals and organizations should be accountable for their actions that may impact cybersecurity.

D. Continuous improvement: Organizations should continuously improve their cybersecurity practices.

iv. There are many benefits to implementing ISO/IEC 27032, including:

A. Improved cybersecurity posture: Implementing the guidelines in ISO/IEC 27032 can help to improve your organization’s overall cybersecurity posture and reduce the risk of cyberattacks.

B. Reduced costs: Cyberattacks can be very expensive. Implementing good cyber hygiene can help to prevent cyberattacks and save your organization money.

C. Enhanced reputation: A strong cybersecurity posture can help to improve your organization’s reputation and make it more attractive to customers and partners.

D. Increased compliance: ISO/IEC 27032 can help your organization to comply with other cybersecurity regulations and standards.

v. Who should implement ISO/IEC 27032?

ISO/IEC 27032 is intended for all organizations, regardless of size or industry. However, it is particularly relevant for organizations that:

o Handle sensitive information

o Have a large number of employees

o Operate in critical infrastructure sectors

vi. Here are some fundamental aspects of ISO/IEC 27032:

A. Purpose: The primary goal of ISO/IEC 27032 is to promote safer and more secure transactions and interactions in cyberspace by preventing data breaches and lowering potential risks.

B. Scope: It addresses various aspects of information security, critical and emerging cyber threats, cybersecurity control mechanisms, and incident management. It also covers the protection of Privacy and personally identifiable information (PII).

C. Cybersecurity Guidelines: The standard offers guidelines for improving the state of Cybersecurity, drawing out the unique aspects of that activity and its dependencies on other information security domains, particularly information security management system (ISMS), Network Security, Incident Management, and Application Security.

D. Roles in Cyberspace: ISO/IEC 27032 explicitly addresses the roles of stakeholders in cyberspace, considering the active involvement and responsibility of individual users, private organizations, businesses, non-profit institutions, and governments.

E. Risk Management: The standard emphasizes the importance of risk management in the context of cybersecurity. Organizations are encouraged to identify, assess, and manage risks associated with their information systems and processes within cyberspace.

F. Interactions with Stakeholders: This standard encourages interactions between different stakeholders within an organization to enhance understanding and coordination of various roles in Cybersecurity.

G. Collaboration: ISO/IEC 27032 promotes the collaboration between various entities, which is seen as essential due to the interconnected nature of cyberspace. Building partnerships and sharing information on threats, vulnerabilities, and incidents is fundamental for enhancing Cybersecurity.

H. Incident Response: It provides guidance on coordination between different types of incident response groups, allowing for more effective and unified response efforts.

I. Interdependencies: Recognizes the complex interdependencies between different information systems and the need to understand these relationships to manage risks comprehensively.

J. Key Principles: ISO/IEC 27032 promotes principles such as understanding and ensuring the appropriate use of information, the incorporation of management system processes, and the safeguarding of stakeholders' actions.

K. Integration with ISO/IEC 27001: ISO/IEC 27032 is designed to complement ISO/IEC 27001, the standard for information security management systems (ISMS). Organizations are encouraged to integrate their cybersecurity efforts with their ISMS.

L. Compliance and Certification: While ISO/IEC 27032 itself is not a certification standard, organizations can use its guidelines to enhance their cybersecurity practices. Certification may be pursued separately, such as through ISO/IEC 27001.
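The risk-management emphasis in item E is often operationalised as a simple likelihood-impact scoring exercise that ranks threats so mitigation effort goes where it matters most. The sketch below is purely illustrative (the threat names and the 1-5 scales are assumptions for the example, not part of the standard):

```python
def risk_score(likelihood, impact):
    """Combine likelihood and impact (each on a 1-5 scale) into a single score."""
    return likelihood * impact

# Hypothetical cyberspace threats with (likelihood, impact) estimates.
risks = {
    "phishing": (4, 3),
    "ransomware": (3, 5),
    "dns-spoofing": (2, 4),
}

# Rank threats by score, highest first, to prioritise mitigation.
ranked = sorted(risks, key=lambda name: risk_score(*risks[name]), reverse=True)
print(ranked)  # ['ransomware', 'phishing', 'dns-spoofing']
```

Real assessments add asset value, existing controls, and residual risk, but even this two-factor ranking makes prioritisation decisions explicit and repeatable.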

vii. How can I implement ISO/IEC 27032?

There are a number of steps you can take to implement ISO/IEC 27032, including:

A. Conduct a risk assessment: Identify the cybersecurity risks that your organization faces.

B. Develop a cybersecurity policy: Define your organization’s approach to cybersecurity.

C. Implement cybersecurity controls: Implement the controls outlined in ISO/IEC 27032.

D. Train your employees: Train your employees on cybersecurity best practices.

E. Monitor and review your cybersecurity program: Regularly monitor and review your cybersecurity program to ensure that it is effective.

viii. Additional resources:

o ISO/IEC 27032 website: [https://www.iso.org/standard/44375.html](https://www.iso.org/standard/44375.html)

o National Institute of Standards and Technology (NIST) Cybersecurity Framework: [https://www.nist.gov/cyberframework](https://www.nist.gov/cyberframework)

o PECB (LinkedIn) – The Fundamentals of ISO/IEC 27032 – What You Need to Know

o StandICT.eu – ISO/IEC 27032:2012 Information technology — Security techniques (standict.eu)

o ResearchGate – Cybersecurity according to ISO/IEC 27032:2012 (researchgate.net)

ix. Conclusion 

In summary, ISO/IEC 27032 provides a holistic approach to cybersecurity, emphasizing collaboration, information sharing, and risk management. Organizations can use these guidelines to strengthen their cybersecurity posture and contribute to a more secure cyberspace environment.

The IT and Security Leader’s Guide to ISO/IEC 27032

The IT and Security Leader’s Guide to ISO/IEC 27032: Building Cyber Resilience in the Digital Age

ISO/IEC 27032 is an international standard focusing on “Cybersecurity” or “Cyberspace Security,” which provides guidelines for enhancing the state of Cybersecurity, drawing attention to the roles and responsibilities of various stakeholders in cyberspace. 

As an IT and Security Leader, understanding and implementing this guidance could be essential for protecting the organization’s information assets.

Here is a brief guide to understanding and utilizing ISO/IEC 27032:

i. Understanding ISO/IEC 27032

A. Developed by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), ISO/IEC 27032 provides best practices for Cybersecurity in cyberspace, where individuals, organizations, and systems interact across the internet.

B. Scope of the Standard: ISO/IEC 27032 addresses Cybersecurity risks and controls when it comes to the internet, cloud computing, and other platforms. It is not a certifiable standard but provides guidelines for ensuring secure operation in a collaborative, interconnected environment.

C. The standard offers guidance spanning areas such as risk assessment, security requirements, incident management, and business continuity.

D. Cybersecurity’s Broad Landscape: The standard acknowledges that Cybersecurity is a broader concept than information security since it encompasses the internet and extends to new electronic domains like the Internet of Things (IoT) and social media.

E. Stakeholder Collaboration: It emphasizes collaboration between stakeholders, including individuals, businesses, organizations, and governments. It ensures that there is clarity on roles and responsibilities to protect the shared cyber environment.

F. Guidelines: It offers comprehensive guidance on policies and procedures, aligning technical and management practices with cybersecurity’s unique demands.

ii. Implementing ISO/IEC 27032

A. Understanding ISO/IEC 27032:

   o Familiarize yourself and your team with the ISO/IEC 27032 standard. Understand its principles, objectives, and the role it plays in addressing cybersecurity challenges in cyberspace.

B. Risk Assessment and Management:

   o Conduct a thorough risk assessment specific to cyberspace, considering potential threats, vulnerabilities, and the impact of cyber incidents.

   o Implement risk management processes aligned with ISO/IEC 27032 to prioritize and address identified risks.

C. Establishing a Cybersecurity Policy:

   o Develop a comprehensive cybersecurity policy that aligns with ISO/IEC 27032 requirements.

   o Ensure that the policy addresses the organization’s commitment to cybersecurity, roles and responsibilities, and compliance with relevant laws and regulations.

D. Building a Cybersecurity Framework:

   o Create a cybersecurity framework based on the guidelines provided by ISO/IEC 27032.

   o Integrate the framework with existing IT and security management systems to streamline processes.

E. Engage Leadership and Stakeholders:

    o Ensure executive leadership support for cybersecurity initiatives.

    o Regularly communicate the importance of ISO/IEC 27032 compliance to stakeholders and demonstrate the positive impact on the organization.

F. Implement Controls: Deploy appropriate technical and organizational controls for the management of cyber risks including, but not limited to, encryption, access controls, penetration testing, and incident management frameworks.

G. Promote Cybersecurity Awareness: Cybersecurity is as much about people as it is about technology—initiate organization-wide awareness programs that inform and educate all stakeholders about cyber risks and best practices.

H. Encourage Collaboration: Facilitate collaboration internally and with external partners to ensure consistent adherence to Cybersecurity measures across all platforms and interactions.

I. Continuous Improvement: IT and security leaders must ensure that their cyber-security measures evolve over time. This includes regularly reviewing and updating policies and procedures, testing them to ensure they remain effective, and revising them when necessary because of changing circumstances or new threats.

J. Training and Awareness: It is fundamental to ensure that all employees, not just security personnel, are fully aware of the guidelines and their roles in maintaining cyber security. This can be achieved through regular training and updates.

K. Incident Response Planning: Develop an incident response plan that anticipates potential cyber incidents and outlines a clear response strategy to handle and recover from such events efficiently.

L. Compliance and Legal Considerations: Understand the legal implications related to Cybersecurity within your jurisdiction and ensure your policies comply with those laws.

M. Compliance Monitoring and Reporting:

   o Establish mechanisms for monitoring and reporting on compliance with ISO/IEC 27032.

   o Regularly review cybersecurity performance metrics and adjust strategies accordingly.

N. Cross-referencing with Other Standards: ISO/IEC 27032 may be used in conjunction with other standards such as ISO/IEC 27001 (ISMS) and ISO/IEC 27002 (code of practice for information security controls), ensuring a comprehensive approach to Cybersecurity.
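Among the controls named in item F, access control is often the simplest to make concrete. The sketch below shows a minimal role-based permission check; the roles and permissions are invented for the example and not drawn from the standard:

```python
# Hypothetical role-to-permission mapping for a role-based access control check.
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "analyst": {"read"},
}

def is_allowed(role, action):
    """Return True only if the role explicitly grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Deny-by-default: an unknown role gets no permissions at all.
print(is_allowed("admin", "delete"))   # True
print(is_allowed("analyst", "write"))  # False
print(is_allowed("guest", "read"))     # False
```

The deny-by-default behaviour (unknown roles receive an empty permission set) reflects the least-privilege posture that cybersecurity guidelines generally recommend.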

iii. Benefits of Implementation

o Enhanced Cybersecurity Posture: Proactive risk management and mitigation strategies lead to a more secure operating environment.

o Improved Data Protection: Implementing strong data security controls safeguards sensitive information across connected systems.

o Compliance and Regulation: Adherence to industry standards and regulations demonstrates commitment to data security and builds trust with customers and partners.

o Increased Business Continuity: Robust incident response and disaster recovery plans minimize disruptions and ensure continuous operations.

o Competitive Advantage: Demonstrating a proactive approach to cybersecurity can differentiate your organization and attract clients.

iv. Challenges and Considerations

o Resource Requirements: Implementing and maintaining ISO/IEC 27032 requires dedicated resources, including personnel with expertise in cybersecurity and compliance.

o Complexity of Cloud Environments: Adapting controls to dynamic and complex cloud environments can be challenging and require ongoing adjustments.

o Change Management: Transitioning to a new security framework necessitates effective change management strategies to overcome resistance and ensure widespread adoption.

v. Conclusion

Embracing ISO/IEC 27032 as IT and security leaders is not just a compliance exercise; it’s an investment in building a more secure and resilient future for your organization in the digital age. By understanding the benefits, key steps, and challenges, you can effectively navigate the implementation process and reap the rewards of enhanced cybersecurity posture, data protection, and sustainable business success.

As an IT and Security Leader, by aligning with ISO/IEC 27032, you not only protect the organization against Cybersecurity threats but also demonstrate commitment to best practices in Cybersecurity, which can enhance the trust and confidence of customers, stakeholders, and partners.

For IT and security leaders, navigating the ever-evolving cybersecurity landscape can be daunting. Fortunately, standards like ISO/IEC 27032 provide a valuable framework to build robust cybersecurity practices and mitigate evolving threats.

Remember: ISO/IEC 27032 is a dynamic standard, so staying updated on revisions and emerging threats is crucial to maintaining an effective cybersecurity posture. Continuous improvement and adaptation are key to realising the full potential of this valuable framework.

vi. Further references 


o https://www.linkedin.com/pulse/fundamentals-isoiec-27032-what-you-need-know-polyd-1c

o https://www.itnewsafrica.com/2023/01/who-should-get-iso-iec-27032-certified-and-why-a-guide-for-it-and-security-leaders/

o https://medium.com/@386konsult.com/iso-27032-guidelines-for-cybersecurity-management-cbb025267888

How to turn old datacentres into critical IT assets

IT infrastructure and operations leaders are increasingly focusing on new ideas and technologies alongside ways to deliver business value


Armed with the right approach, legacy datacentre infrastructure can be reinvented to increase capacity, support new and emerging business services, and reduce operating costs.

As part of this approach, organisations with existing workloads remaining within their datacentres must decide how best to restructure their physical infrastructure to improve efficiencies and extend the datacentre’s useful life.

Most IT infrastructure and operations (I&O) leaders are dedicating their attention to cloud migrations, edge strategies and moving workloads closer to the customer. But they need to remember that a core set of workloads may remain on-premise. Although continued investment in an older, more traditional datacentre may seem contradictory, it can yield significant benefits to short-term and long-term planning.

There are three key ways that I&O leaders can optimise existing datacentres to support new and emerging business services – by enhancing the way they deliver, maximising the space and reinventing the infrastructure.

Enhancing delivery

In datacentres that are nearing operational capacity, the main limitation is a lack of physical space and power to support additional equipment or adequate cooling infrastructures. This results in companies either choosing to build a new, next-generation datacentre to support longer-term growth, or using colocation, cloud or hosting services as a solution.

Although these are viable options, they each entail moving workloads away from the traditional on-premise operation. This introduces risk and adds complexity to the operating environment. A possible alternative for long-term upgrades of existing datacentres is to use self-contained rack systems.

These manufactured enclosures contain a group of racks designed to support medium to high-compute densities. Often integrating their own cooling mechanism, retrofitting or repurposing high-density computing self-cooling racks can be a simple and effective way to improve datacentre space.

Maximise space

Clearing out a small section of floorspace for one of these self-contained units is the least intrusive retrofit technique. You can then break floorspace into discrete sections and reconfigure your arrangement.

Self-contained rack units will typically draw power from an existing power distribution unit or, in some cases, may require a refrigerant or coolant distribution unit. Allow for an increase in per-rack floorspace of about 20% to account for this additional supporting equipment.

Because in-rack cooling systems are self-contained, they don’t require a hot aisle/cold aisle configuration or containment. This will enable more flexibility in the placement of the new racks on the datacentre floor.

Once the unit is installed, begin a phased migration of workloads from other sections of the floor. This is not a one-for-one migration, because these rack units can support higher cooling densities. Often, an existing datacentre will only utilise on average 50-60% of rack capacity because higher-density racks cause hotspots on the floor.

With self-contained racks, the amount of workload migrated is often 40-50% greater. For example, a new, self-contained four-rack unit might absorb the workloads from between six and eight racks on the existing floor.
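The consolidation arithmetic above can be sketched as follows, assuming each self-contained rack runs at roughly full capacity while existing racks average only 50-60% utilisation (the figures are the article's rough ranges, not measurements):

```python
def racks_absorbed(new_racks, old_utilisation):
    """Estimate how many existing, partially utilised racks a
    self-contained unit can absorb, assuming the new racks run
    at roughly 100% of their capacity."""
    return new_racks / old_utilisation

# A four-rack self-contained unit against the 50-60% utilisation range:
print(round(racks_absorbed(4, 0.60), 1))  # 6.7 existing racks
print(round(racks_absorbed(4, 0.50), 1))  # 8.0 existing racks
```

Under these assumptions a four-rack unit absorbs roughly six to eight existing racks' worth of workload, which matches the range quoted above.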

Therefore, workloads moved to the new enclosure are unlikely to come from the same racks, and the older section of the server area will be heavily fragmented. The next phase in the project entails defragmenting the environment and moving workloads out of under-utilised racks to free up additional floorspace.

Once these workloads are moved, the process of physically relocating equipment and clearing out the next section of floorspace to make room for the next self-contained rack installation can begin. As each subsequent unit is installed, the overall density of computing per rack increases, resulting in a significantly higher compute-per-square-foot ratio and a smaller overall datacentre footprint.

This migration phase might also be an excellent time to consider a server refresh, depending on where existing servers are in their economic lifecycles. Implementing smaller server form factors can increase rack density while reducing overall power and cooling requirements.

Key to all of this remains the input power to the datacentre – it must be adequate for the higher-density racks. One offsetting benefit is that the overall cooling load can decrease as more workloads move to high-density racks, because much of the cooling airflow is handled inside the rack, reducing the amount of airflow needed across the entire datacentre space.

Reinvent infrastructure

While new chip designs attempt to lower the heat footprint of processors, increased computing power requirements lead to higher equipment densities – which, in turn, increases cooling requirements. As the number of high-density servers grows, I&O leaders must provide adequate cooling levels for computer rooms.

Those looking to retrofit datacentres for extreme densities in a small footprint, perhaps for quantum computing or artificial intelligence (AI) applications, should consider liquid or immersive cooling systems as viable options. Gartner predicts that by 2025, datacentres deploying speciality cooling and density techniques will see 20-40% reductions in operating costs. This topic will be further discussed at the Gartner IT Infrastructure, Operations & Cloud Strategies Conference in November.

Cooling can consume as much as 60-65% of a datacentre’s total power. Higher-density racks of 15kW to 25kW can often require more than 1.5kW of cooling load for every 1kW of IT load, just to create the cool airflow needed to support those racks.
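A rough sketch of what that ratio implies when provisioning power for a rack (the function name and the 20kW example are illustrative; the 1.5:1 ratio is the figure quoted above):

```python
def total_rack_power_kw(it_load_kw: float, cooling_ratio: float = 1.5) -> float:
    """Total power a high-density rack draws: the IT load itself plus
    the cooling load at the stated ratio (1.5 kW cooling per 1 kW IT)."""
    return it_load_kw * (1 + cooling_ratio)

# A 20 kW rack at a 1.5:1 cooling-to-IT ratio draws 50 kW in total
print(total_rack_power_kw(20))  # 50.0
```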

Rear-door heat exchangers (RDHx) are field-replaceable rack doors (in most instances) that cool the hot air as it exits the rack, rather than relying on airflow across the datacentre. One benefit of RDHx is that, in addition to more efficient racks, much of the power once used for cooling becomes available for reuse by facilities to support other building systems, or can be rerouted as additional IT load. RDHx suppliers include Fujitsu, Vertiv, Schneider Electric, Nortek Air Solutions, CoolIT Systems and Opticool.

Using liquid cooling can solve the high-density, server-cooling problem, because water (conductive cooling) conducts more than 3,000 times as much heat as air and requires less energy to do so. Liquid cooling enables the ongoing scalability of computing infrastructure to meet business needs.

It may not be obvious that RDHx can save money, so customers must be willing to build the business case. Depending on heat load and energy costs, return on investment (ROI) can be attained within a few years. In many cases, previously planned facilities upgrades (with typical ROI of between 15 and 20 years) may not be required.

Most suppliers have recently started providing a refrigerant solution, instead of water. A low-pressure refrigerant can alleviate water leakage concerns because, if leaks occur, refrigerants can boil off as non-toxic, non-corrosive gases. Although this may add extra cost for coolant distribution units, it will remove any worry of water leaks damaging the equipment.

Immersive cooling systems are also gaining acceptance, especially where self-contained, high-density (40-100kW and beyond) systems are needed.

Direct immersion and liquid cooling systems are now available. They can be integrated into existing datacentres with air-cooled servers. However, adoption has been slow, considering the heavy investment in mechanical cooling methodologies and the continually improving power efficiency of modern systems. Immersive cooling suppliers include Green Revolution Cooling, Iceotope, LiquidCool, TMGCore and Stulz.

Because every environment is different, it is critical for I&O leaders to use detailed metrics, such as power usage efficiency (PUE) or datacentre space efficiency (DCSE), to estimate the benefits and unique cost savings from such investments.
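Of the metrics mentioned, PUE is the simplest: total facility power divided by IT equipment power, with 1.0 as the theoretical ideal. A minimal sketch, with illustrative example figures:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.
    1.0 is the ideal; air-cooled facilities commonly run well above that."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# 500 kW of IT load plus 300 kW of cooling and other facility overhead:
print(pue(800, 500))  # 1.6
```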

The bottom line

I&O leaders may attain significant growth in their existing facilities by implementing a phased datacentre retrofit, while reducing the cooling requirements and freeing up power for additional IT workloads.

This activity is not without risk, because any physical equipment move within a live production datacentre is risky. However, if executed as a long-term project, and broken into small, manageable steps, the benefits can be far-reaching and far outweigh the risks.

From a budgeting point of view, it is also easier to implement, because the capital requirements are spread over multiple quarters, versus a traditional datacentre build project. Also, the overall costs will be significantly less than a new build, with many of the same benefits.

As I&O leaders begin to enhance their datacentres to support new business services and reduce operating costs, they should keep this step-by-step approach in mind.

https://www.computerweekly.com/feature/Gartner-How-to-turn-old-datacentres-into-critical-IT-assets?utm_campaign=20211007_ERU+Transmission+for+10%2F07%2F2021+%28UserUniverse%3A+335905%29&utm_medium=EM&utm_source=ERU&src=8387650&asrc=EM_ERU_184082271&utm_content=eru-rd2-rcpG

Mainstream adoption of SDN, SD-WAN finally arrives

The top IT initiatives of enterprise network managers are cloud and software-defined data centers, overcoming server virtualization’s dominance for the past 10 years, according to Enterprise Management Associates.

In 2018, for the first time, cloud and software-defined data-center concerns became the primary focus of enterprise network teams, bumping server virtualization from the top spot, according to an Enterprise Management Associates (EMA) report based on a survey of 251 North American and European enterprise network managers.

This is the first shift in their priorities in more than a decade. Since 2008, EMA has been asking network managers to identify the broad IT initiatives that drive their priorities. Server virtualization has dominated their responses year after year. Cloud and software-defined data center (SDDC) architectures have always been secondary or tertiary drivers.

In 2018, this pattern has finally broken, according to EMA’s “Network Management Megatrends 2018” research. This shift in drivers is also leading to mainstream focus on software-defined networking (SDN), network virtualization and software-defined wide-area networking (SD-WAN).

Server virtualization and workload consolidation

Before there was vCloud, OpenStack, Amazon Web Services, Microsoft Azure or even ESXi, there was simply VMware ESX. At its outset, VMware’s server-virtualization technology allowed enterprises to run more than one application on an x86 host, which allowed data center operators to consolidate workloads onto a smaller number of servers. Virtualization allowed enterprises to remove hardware, save rack space and drive efficiency in power and cooling. Virtualization also facilitated the decomposition of monolithic applications into multi-tiered application architectures, where different layers of an application could run on separate virtual machines.

This consolidation of workloads and the decomposition of applications that followed had profound impacts on networks. Bandwidth demand at the server access layer expanded, and the amount of traffic traveling east-west between servers exploded. Network engineers have spent the last decade reacting to this, building flatter, leaf-spine networks and replacing spanning tree protocol with equal-cost multi-pathing schemes.
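The equal-cost multi-pathing schemes mentioned above typically hash each flow's 5-tuple to pick one of several equal-cost links, so all packets of a flow stay on one path and arrive in order. A sketch of that selection logic (the 5-tuple values and path count are hypothetical; real switches do this in hardware):

```python
import hashlib

def ecmp_path(flow: tuple, n_paths: int) -> int:
    """Pick one of n equal-cost paths by hashing the flow's 5-tuple.
    The same flow always hashes to the same path, avoiding reordering."""
    digest = hashlib.sha256(repr(flow).encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % n_paths

# Hypothetical flow: (src_ip, dst_ip, protocol, src_port, dst_port)
flow = ("10.0.1.5", "10.0.2.9", "tcp", 49152, 443)
path = ecmp_path(flow, 4)          # a stable value in 0..3
assert path == ecmp_path(flow, 4)  # same flow -> same path
```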

That is why network managers have told EMA over and over again since 2008 that the broad IT initiative that most drives networking is server virtualization by a wide margin.

2018: software-defined and cloud architectures

While server virtualization remains influential on networking in 2018 (35% of network managers), it is no longer the top driver. This year, several other initiatives moved into a virtual tie with it, which indicates that mainstream enterprises are asking the network to support next-generation technologies. EMA asked network managers to identify all broad IT initiatives that were driving their priorities. The top driver in 2018 is SDDC architecture (37%). Other leading initiatives are infrastructure as a service (35%) and private cloud architecture (35%). Note that all these newly influential initiatives are ideas that draw on server virtualization as a foundational technology.

This data suggests an inflection point. For years, technology companies anxious to sell next-generation solutions have claimed this inflection point had already arrived, despite limited supporting evidence. Finally, EMA sees this shift in the numbers. Network managers are turning their focus away from east-west traffic optimization in the data center to programmable, software-defined technologies.

SDN and SD-WAN finally have their day

Network teams will naturally adopt new technologies to support these next-generation initiatives, and EMA research affirms this. In 2014 and 2016, network managers told us that network security and WAN optimization were the two most important networking initiatives they were dealing with.

Security is always a major concern, since the threat landscape is always evolving and challenging the security team. But WAN optimization is by no means a technology of the future. Its popularity always suggested that enterprises were focused on extracting more value out of their high-priced and bandwidth-constrained MPLS networks.

Network teams tasked with supporting cloud and SDDC architecture need to expand their technology focus. This year’s Megatrends research found just that. Network security (43%) remains at the top of the list, but data center SDN (40%), network virtualization (37%) and SD-WAN (36%) are near the top of the list. In previous years they were afterthoughts.

This shift in priorities suggests that mainstream enterprises are at last focused on these solutions after years of hype. Data center SDN and network virtualization are essential to private cloud and SDDC initiatives. They make the network more dynamic, agile and programmatic. Meanwhile, SD-WAN brings many of the same benefits to the WAN, and it facilitates the connection of remote sites to public-cloud environments.

WAN optimization (36%) remains relevant, but the rise of SD-WAN suggests that a tectonic shift is happening in the WAN, since SD-WAN enables enterprises to supplement or replace MPLS with broadband internet.

https://www.networkworld.com/article/3273426/data-center/survey-mainstream-adoption-of-sdn-sd-wan-finally-arrives.html

Enterprise Risk Management for Cloud Computing

As defined in COSO’s 2004 Enterprise Risk Management – Integrated Framework: “Risk is the possibility that an event will occur and adversely affect the achievement of objectives.”
The types of risks (e.g., security, integrity, availability, and performance) are the same with systems in the cloud as they are with non-cloud technology solutions.

An organization’s level of risk and risk profile will in most cases change if cloud solutions are adopted (depending on how and for what purpose the cloud solutions are used). This is due to the increase or decrease in likelihood and impact with respect to the risk events (inherent and residual) associated with the CSP that has been engaged for services.

Some of the typical risks associated with cloud computing are:

  • Disruptive force – Facilitating innovation (with increased speed) and the cost-savings aspects of cloud computing can themselves be viewed as risk events for some organizations. By lowering the barriers of entry for new competitors, cloud computing could threaten or disrupt some business models, even rendering them obsolete in the future. For example, streaming media over the Internet was a technology solution that significantly reduced the sales of CDs and DVDs and the need for physical retail stores. Existing competitors that fully embrace the cloud might be able to bring new ideas and innovation into their markets faster. Since cloud computing solutions yield considerable short-term cost savings due to reduced capital expenditures, an organization adopting the cloud might be able to extract better margins than its non-cloud competitors. Thus, when an industry member adopts cloud solutions, other organizations in the industry could be forced to follow suit and adopt cloud computing.
  • Residing in the same risk ecosystem as the CSP and other tenants of the cloud – When an organization adopts third-party-managed cloud solutions, new dependency relationships with the CSP are created with respect to legal liability, the risk universe, incident escalation, incident response, and other areas. The actions of the CSP and fellow cloud tenants can impact the organization in various ways. Consider the following:
    1. Legally, third-party cloud service providers and their customer organizations are distinct enterprises. However, if the CSP neglects or fails in its responsibilities, it could have legal liability implications for the CSP’s customer organizations. But if a cloud customer organization fails in its responsibilities, it is less likely there would be any legal implications to the CSP.
    2. Cloud service providers and their customer organizations are likely to have separate enterprise risk management (ERM) programs to address their respective universe of perceived risks. Only in a minority of cases (involving very high-dollar contracts) will CSPs attempt to integrate portions of their ERM programs with those of their customers. The universe of risks confronting an organization using third-party cloud computing is a combination of risks the individual organization faces along with a subset of the risks that its CSP is facing.
  • Lack of transparency – A CSP is unlikely to divulge detailed information about its processes, operations, controls, and methodologies. For instance, cloud customers have little insight into the storage location(s) of data, algorithms used by the CSP to provision or allocate computing resources, the specific controls used to secure components of the cloud computing architecture, or how customer data is segregated within the cloud.
  • Reliability and performance issues – System failure is a risk event that can occur in any computing environment but poses unique challenges with cloud computing. Although service-level agreements can be structured to meet particular requirements, CSP solutions might sometimes be unable to meet these performance metrics if a cloud tenant or incident puts an unexpected resource demand on the cloud infrastructure.
  • Vendor lock-in and lack of application portability or interoperability – Many CSPs offer application software development tools with their cloud solutions. When these tools are proprietary, they may create applications that work only within the CSP’s specific solution architecture. Consequently, these new applications (created by these proprietary tools) might not work well with systems residing outside of the cloud solution. In addition, the more applications developed with these proprietary tools and the more organizational data stored in a specific CSP’s cloud solution, the more difficult it becomes to change providers.
  • Security and compliance concerns – Depending on the processes cloud computing is supporting, security and retention issues can arise with respect to complying with regulations and laws such as the Sarbanes-Oxley Act of 2002 (SOX), the Health Insurance Portability and Accountability Act of 1996 (HIPAA), and the various data privacy and protection regulations enacted in different countries. Examples of these data privacy and protection laws would include the USA PATRIOT Act, the EU Data Protection Directive, Malaysia’s Personal Data Protection Act 2010, and India’s IT Amendments Act. In the cloud, data is located on hardware outside of the organization’s direct control. Depending on the cloud solution used (SaaS, PaaS, or IaaS), a cloud customer organization may be unable to obtain and review network operations or security incident logs because they are in the possession of the CSP. The CSP may be under no obligation to reveal this information or might be unable to do so without violating the confidentiality of the other tenants sharing the cloud infrastructure.
  • High-value cyber-attack targets – The consolidation of multiple organizations operating on a CSP’s infrastructure presents a more attractive target than a single organization, thus increasing the likelihood of attacks. Consequently, the inherent risk levels of a CSP solution in most cases are higher with respect to confidentiality and data integrity.
  • Risk of data leakage – A multi-tenant cloud environment in which user organizations and applications share resources presents a risk of data leakage that does not exist when dedicated servers and resources are used exclusively by one organization. This risk of data leakage presents an additional point of consideration with respect to meeting data privacy and confidentiality requirements.
  • IT organizational changes – If cloud computing is adopted to a significant degree, an organization needs fewer internal IT personnel in the areas of infrastructure management, technology deployment, application development, and maintenance. The morale and dedication of remaining IT staff members could be at risk as a result.
  • Cloud service provider viability – Many cloud service providers are relatively young companies, or the cloud computing business line is a new one for a well- established company. Hence the projected longevity and profitability of cloud services are unknown. At the time of publication, some CSPs are curtailing their cloud service offerings because they are not profitable. Cloud computing service providers might eventually go through a consolidation period. As a result, CSP customers might face operational disruptions or incur the time and expense of researching and adopting an alternative solution, such as converting back to in-house hosted solutions.

In addition to these risks, certain characteristics of cloud computing may give rise to other less apparent challenges that warrant evaluation.

Some management teams may be willing to accept the risks of running their entire enterprise in a public cloud given the small up-front capital investment requirements. Start-ups and venture capitalists are likely to prefer focusing their investments on the business model rather than a technology infrastructure that would be of limited value if the venture were to fail. Start-ups can deploy their business models supported by cloud solutions more quickly and more economically in comparison to the previous generation of technology options.

All of the cloud computing risks discussed here should be given careful consideration (that is, undergo a risk assessment), as the materialization of any of these risks will present very undesirable consequences. Many of the risks highlighted here are not likely to be mitigated by contractual clauses with a CSP (assuming the contract is even negotiable – most commodity cloud contracts are not). Consequently, mitigation solutions may need to be implemented outside of the immediate cloud solution provided by the CSP.

https://www.coso.org/Documents/Cloud-Computing-Thought-Paper.pdf

Statistics Forecast Future of IoT

When we started this decade, the Internet of Things was basically a buzzword, talked about by a few, acted upon by fewer, a challenge to save for the future, like 2015 or 2020.

But as a famous character once said in a movie that’s now 30 years old, “life moves pretty fast…” and now, here we are with 2015 in the rear view mirror and our 2020 vision becoming clearer by the minute.

Everyone’s talking about the Internet of Things, even the “things,” which can now request and deliver customer support, tell if you’re being as productive as you could be at work, let your doctor know if you’re following orders (or not), reduce inefficiencies in energy consumption, improve business processes, predict issues and proactively improve or resolve them based on data received.

The Internet of Things (IoT) is just getting started. These forecasts below show why organizations need to get started too (if they haven’t already) on leveraging and responding to the Internet of Things:

1. The worldwide Internet of Things market is predicted to grow to $1.7 trillion by 2020, marking a compound annual growth rate of 16.9%. – IDC Worldwide Internet of Things Forecast, 2015 – 2020. 

2. An estimated 25 billion connected “things” will be in use by 2020. – Gartner Newsroom

3. Wearable technology vendors shipped 78.1 million wearable devices in 2015, an increase of 171.6% from 2014. Shipment predictions for this year are 111 million, increasing to 215 million in 2019. – IDC Worldwide Quarterly Wearable Device Tracker

4. By 2020, each person is likely to have an average of 5.1 connected devices. – Frost and Sullivan Power Management in IoT and Connected Devices

5. In a 2016 PwC survey of 1,000 U.S. consumers, 45% say they now own a fitness band, 27% a smartwatch, and 12% smart clothing. 57% say they are excited about the future of wearable technology as part of everyday life. 80% say wearable devices make them more efficient at home, 78% more efficient at work. – PwC The Wearable Life 2.0: Connected Living in a Wearable World 

6. By 2020, more than half of major new business processes and systems will incorporate some element, large or small, of the Internet of Things. – Gartner Predicts 2016: Unexpected Implications Arising from the Internet of Things 

7. 65% of approximately 1,000 global business executives surveyed agree that organizations that leverage the Internet of Things will have a significant advantage; 19%, however, say they have never heard of the Internet of Things. – Internet of Things Institute 2016 IoT Trends Survey 

8. 80% of retailers worldwide say they agree that the Internet of Things will drastically change the way companies do business in the next three years. – Retail Systems Research: The Internet of Things in Retail: Great Expectations 

9. By 2018, six billion things will have the ability to request support. – Gartner Predicts 2016: CRM Customer Service and Support

10. By 2020, 47% of devices will have the necessary intelligence to request support. – Gartner Predicts 2016: CRM Customer Service and Support 

11. By 2025, the Internet of Things could generate more than $11 trillion a year in economic value through improvements in energy efficiency, public transit, operations management, smart customer relationship management and more. – McKinsey Global Institute Report: The Internet of Things: Mapping the value behind the Hype

12. Barcelona estimates that IoT systems have helped the city save $58 million a year from connected water management and $37 million a year via smart street lighting alone. – Harvard University Report 

13. General Electric estimates that the “Industrial Internet” market (connected industrial machinery) will add $10 to $15 trillion to the global GDP within the next 20 years. – GE Reports 

14. General Electric believes that using connected industrial machinery to make oil and gas exploration and development just 1% more efficient would result in a savings of $90 billion. – GE Reports 

15. The connected health market is predicted to grow to $117B by 2020. Remote patient monitoring is predicted to be a $46 billion market by 2017. – ACT Report 

16. Connected homes will be a major part of the Internet of Things. By 2019, companies will ship 1.9 billion connected home devices, marking an estimated $490 billion in revenue (Business Insider Intelligence). By 2020, even the connected kitchen will contribute at least 15 percent savings in the food and beverage industry, leveraging data analytics. – Gartner Research Predicts 2015: The Internet of Things

The Internet of Things is accelerating the transformation of the way we live and work. Life moves pretty fast. Stop and look around, but don’t miss it.

Cisco-VMware Towards Closer SDN Relationship

Cisco Systems has had a testy relationship with its longtime partner VMware in the battle for software-defined networking customers, but CEO Chuck Robbins suggested Thursday that this may be changing.

As CRN reported last month, several joint customers have begun deploying Cisco’s and VMware’s SDN technologies side by side to solve specific business challenges. On VMware’s earnings call in April, CEO Pat Gelsinger said SugarCreek and Shutterfly are examples of customers that are using both versions of SDN.

Asked whether Cisco and VMware might consider working together to make it easier for customers to use the two technologies in tandem, Robbins said the vendors are actively exploring such an arrangement.

“As it relates to VMware, I think our teams are talking about where there might be points that balance the competitive nature of the partnership, but also meet perhaps some of the emerging customer asks. So I think that’s to be determined,” Robbins told CRN.

In the interview, Robbins compared the SDN situation with VMware to Cisco’s longstanding competition with Microsoft in the unified communications space.

“Our customers have wanted us to drive greater interoperability in our collaboration portfolios, and I think that’s something both of us should consider and we’re having conversations about it,” Robbins said of Microsoft.

Cisco’s SDN offering, called Application Centric Infrastructure (ACI), is a software-hardware mix that includes Nexus 9000 switches and its Application Policy Infrastructure Controller (APIC). VMware’s offering, called NSX, is a software-only technology.

While Robbins has also criticized NSX as being unfit for enterprises, the Cisco-VMware relationship has seen a noticeable thaw since he took the helm last July, according to several partners that work with both vendors.

Faisal Bhutto, vice president of enterprise networking, cloud and cybersecurity at Computex Technology Solutions, said customers are growing increasingly comfortable with deploying both NSX and ACI within their organizations.

“Twelve to 18 months down the road, we will see customers running both [NSX and ACI] in their environments,” said Bhutto. “However, some environments may be isolated and running autonomously. Customers will ultimately win if Cisco and VMware build the bridge technologies to talk to each other.”

Bhutto said he’s hoping to see some sort of formal integration between NSX and ACI. “I’m glad that the trash talk has stopped. But ultimately, this has to develop into actionable integration and collaboration, like Cisco and VMware used to have,” Bhutto said.

Ross Brown, VMware’s senior vice president of worldwide partners and alliances, told CRN his company has always been open to working with Cisco and other networking vendors on SDN integration.

“VMware has maintained for some time now that NSX and ACI are not competitive, and even partners and customers are proving this out in the market today,” Brown said in an email. “Since introducing NSX, we have welcomed the opportunity to partner with all of the major networking hardware vendors.”

Most industry watchers believe there is plenty of room for both Cisco and VMware to thrive in the SDN market, and both are seeing strong growth for their respective SDN offerings.

Cisco, in its third quarter earnings Wednesday, said ACI revenue grew 100 percent year over year and is now on a $2.2 billion annualized run rate. VMware said NSX license bookings grew 100 percent year over year last quarter, but the vendor hasn’t updated the $600 million run rate figure for NSX it announced in January.

EMC launches DD VE, virtual edition of Data Domain

EMC unveils the virtual edition of its Data Domain disk library, DD VE, allowing the Data Domain operating system to run inside a VMware hypervisor and back up to any hardware.

EMC today released a software-only version of its Data Domain disk backup product to handle data deduplication and replication to industry-standard hardware.

Data Domain is EMC’s market-leading disk backup library platform. Data Domain Virtual Edition (DD VE) decouples the software from the Data Domain hardware. DD VE is a virtual appliance that installs inside a VMware hypervisor and uses the Data Domain operating system to reduce data capacity during backups. Customers supply the target hardware and still need backup software, just as they do with physical Data Domain appliances.

A DD VE license includes Data Domain’s inline deduplication, replication, encryption and DD Boost for faster backups.

DD VE’s use cases, product details

EMC is pitching DD VE mainly for remote offices, although it can also be used by cloud providers and to protect data on hyper-converged systems.

DD VE protects from 1 TB to 16 TB of data, so it can conceivably replace the smallest Data Domain physical library in an organization, but not the larger libraries, which scale to hundreds of terabytes. Customers can scale DD VE in 1 TB increments.

At $1,675 per terabyte, the raw cost of the software version is no less than the physical appliances. For instance, a 4 TB DD VE costs $6,700, compared with the list price of $6,806 for 4 TB of capacity on a DD2200. For 16 TB of DD VE, the cost is $26,000, compared with the $23,027 list price for the maximum of 17.2 TB on a DD2200. The DD VE pricing doesn’t include hardware.
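The per-terabyte arithmetic above is easy to check (a sketch using the quoted list prices; the function name is illustrative):

```python
def ddve_license_cost(capacity_tb: int, per_tb_usd: int = 1675) -> int:
    """DD VE licence cost at the quoted $1,675-per-terabyte list price.
    Hardware is not included; capacity scales in 1 TB increments."""
    return capacity_tb * per_tb_usd

# 4 TB of DD VE vs. the $6,806 list price for 4 TB on a physical DD2200:
print(ddve_license_cost(4))  # 6700
```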

But DD VE allows customers to install and scale Data Domain software in use cases where it was not previously available or convenient. It could save customers money when backing up small amounts of data in many sites, instead of installing a Data Domain physical library in each one. A DD VE license can be spread among sites or hardware appliances.

“Being software-defined makes the world much more accessible for us,” said Caitlin Gordon, EMC’s director of product marketing for data protection. “Software-defined storage can lead to a lot of exciting things, like the ability to deploy in a cloud. Software-defined means you can deploy it anywhere.”

EMC is offering electronic licensing for DD VE. Customers can buy and license the amount of capacity they want by downloading OVA files. Starting April 22, EMC will offer a 0.5 TB try-and-buy trial for nonproduction use. The free trial version has only community support.

The software will do an assessment of the target hardware when it installs to make sure it is compatible. EMC will also publish a hardware compatibility list. EMC recommended customers use DD VE with a RAID 6 scheme.

There is no support for deduplication across devices in the original release.

Gordon said EMC will expand capacity for DD VE “relatively quickly,” as well as add features.

DD VE opens ‘new markets’ for EMC

The DD VE release is no surprise. EMC executive Guy Churchward spoke of EMC running it internally in an interview nearly two years ago, and EMC Information Infrastructure CEO Dave Goulden said in January the product would be generally available soon. Dell, Hewlett Packard Enterprise and Quantum already have virtual backup appliances.

Gordon said EMC has been running DD VE in the lab for so long that the initial release is actually DD VE 2.0. DD VE has also had a long customer technical evaluation program.

“For EMC to deliver a virtual appliance wasn’t overly challenging, but it does open new markets for them,” said Jason Buffington, principal analyst at Enterprise Strategy Group Inc., in Milford, Mass. “This unblocks some of the more edge or fringe scenarios for people who may have wanted a Data Domain box, but wanted a different form factor.”

Buffington said hyper-converged customers and managed service providers (MSPs) are good DD VE candidates, along with remote offices.

“If you put all your eggs in a hyper-converged basket, do you really want another basket — a physical Data Domain — next to it?” he said. “If you already believe in virtual environments, why not put DD VE in?”

Buffington said the attraction for MSPs is that they don't have to sell customers on a multi-tenant setup; they can give each customer a dedicated DD VE for its data.

A successful test case for DD VE

Ben Leggo, general manager of cloud services for Australian managed services provider Tecala, said he has tested DD VE for four months with a variety of hardware and backup software. He said the virtual version worked as well as the software in his physical Data Domain boxes, and he intends to put it in production for several data protection services.

“Our principal use case will be for branch office backup,” Leggo said. “It’s quite expensive to deploy hardware [at branch offices] for the amount of data you want to back up. If a customer’s already virtualized, we can deploy this quite cost-effectively and replicate out to physical Data Domain devices.”

Tecala has DD4500 libraries in its data center and DD2200s at customer sites. Leggo said when EMC supports 70 TB with DD VE, he will consider replacing physical Data Domains in the data center. He said he is also looking for EMC to add public cloud support to DD VE so Tecala can offer services that back up data to Microsoft Azure.

Gartner distinguished analyst Dave Russell said DD VE can be a quick way to configure disk backup for small implementations, but customers would probably not want to scale it too high on their own hardware.

“This is good for rapid deployment and rapid redeployment of resources,” he said. “Today’s project might be 10 remote offices, but two quarters from now, it might be something different and you can reconfigure DD VE on the fly. But software-defined puts some of the onus on the IT shop to be a product integrator themselves. The bigger the platform capacity, the more onus on the customer to deploy it on the right kind of hardware.”


Leaf-Spine Network Architecture


Leaf-spine is a two-layer network topology composed of leaf switches and spine switches. Servers and storage connect to leaf switches and leaf switches connect to spine switches. Leaf switches mesh into the spine, forming the access layer that delivers network connection points for servers. Spine switches have high port density and form the core of the architecture.

Every leaf switch in a leaf-spine architecture connects to every spine switch in the fabric. No matter which leaf switch a server is connected to, its traffic crosses the same number of devices to reach any other server. (The only exception is when the other server is on the same leaf.) This minimizes latency and bottlenecks because each payload only has to travel to a spine switch and another leaf switch to reach its endpoint.
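That uniform path length can be captured in a toy model (the leaf names here are hypothetical, not tied to any vendor's fabric):

```python
def switches_traversed(src_leaf: str, dst_leaf: str) -> int:
    """In a leaf-spine fabric, traffic between servers on the same
    leaf crosses only that one switch; otherwise it always crosses
    exactly three (source leaf -> spine -> destination leaf),
    regardless of which two leaves are involved."""
    if src_leaf == dst_leaf:
        return 1
    return 3

print(switches_traversed("leaf1", "leaf1"))  # 1
print(switches_traversed("leaf1", "leaf4"))  # 3
```

Because the answer is the same for every pair of distinct leaves, latency between any two racks is predictable, which is the property the paragraph above describes.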

A leaf-spine topology can be layer 2 or layer 3, depending on whether the links between the leaf and spine layers are switched or routed. In a layer 2 leaf-spine design, Transparent Interconnection of Lots of Links (TRILL) or Shortest Path Bridging takes the place of spanning tree. All hosts are linked to the fabric and offer a loop-free route to their Ethernet MAC address through a shortest-path-first computation. In a layer 3 design, each link is routed. This approach is most efficient when virtual local area networks are confined to individual leaf switches or when a network overlay, such as VXLAN, is in use.
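In the routed (layer 3) design, equal-cost multipath routing spreads traffic across the spines: because every leaf connects to every spine, a flow between two leaves has as many equal-cost paths as there are spine switches. A minimal sketch of the fabric math (the counts are illustrative):

```python
def fabric_links(num_leaves: int, num_spines: int) -> int:
    """Full mesh between layers: every leaf has one uplink
    to every spine."""
    return num_leaves * num_spines

def ecmp_paths(num_spines: int) -> int:
    """Each leaf-to-leaf flow can transit any spine, so the number
    of equal-cost paths equals the number of spines."""
    return num_spines

# Example: 8 leaves and 4 spines give 32 fabric links and
# 4 equal-cost paths between any pair of leaves.
print(fabric_links(8, 4))  # 32
print(ecmp_paths(4))       # 4
```

Adding a spine therefore grows both cross-sectional bandwidth and path redundancy at the same time, which is why leaf-spine fabrics scale out by adding spines rather than by buying bigger core switches.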