Category Archives: Best Practices

Breach Prevention – Lessons Learned

Sony Pictures Entertainment. eBay. Target. JPMorgan Chase. Home Depot. Community Health Systems. The top breaches of 2014 were front-page news. And some were even career-killers for top executives.

“2014 was a tipping point for cybersecurity,” says Kennet Westby, co-founder and president of cyber-risk management firm Coalfire. Attacks were “more widespread and costly than any we’ve seen before.”

Here are five breach prevention tips based on the lessons learned from 2014’s major data breaches.

1. Manage Threats from Top Down

The high-profile data breaches of 2014 have helped to raise awareness among senior executives and board members of cyberthreats and the need for them to be managed from the top down, Westby says.

“This is new,” he says. “We used to get a few dozen calls a year for cybersecurity management from an IT director or mid-level manager, and we had to fight an uphill battle to get attention from the top. Now we’re hearing from board members and CEOs.”

One catalyst for that rise in awareness among senior executives was Target’s corporate restructuring following the company’s breach, which impacted 40 million payment card numbers and the personal details of 70 million customers (see: 7 Lessons from Target’s Breach). Gregg Steinhafel resigned as Target’s chairman, president and CEO. Following his departure, the retailer made several high-profile hires, including tapping former PepsiCo executive Brian Cornell as its new CEO and Jacqueline Hourigan Rice as senior vice president and chief risk and compliance officer, reporting directly to the CEO.

Target’s hires could inspire other companies to restructure their organization charts to give more authority to the executive in charge of data protection, says Francoise Gilbert, founder and managing director of the IT Law Group. “If companies can follow the leadership of Target and, on their own terms and with their own budget, pay more attention to the protection of personal information and company data, they will quickly see the return on their investment,” she says.

Boards and the C-suite need to look at cybersecurity as a continuous risk management function, Westby explains. “Understand what compliance measures need to be taken to achieve a basic level of security,” he says. “[Then], determine what steps need to be taken for ongoing planning and testing.”

In addition, having proper cyberthreat management requires the C-suite to create a culture of security within an organization, says Neal O’Farrell, executive director of the Identity Theft Council. “My No. 1 piece of advice is that vocal paranoia is vital,” he says. “Worry and stress about security all the time, and talk about it constantly. That’s the best chance you have for creating that culture of security that’s so critical to avoiding the predictable and avoidable mistakes.”

2. Ramp Up Employee Training

Mistakes made by employees are a common cause of major breaches and point to the need for ramped-up security training for all users, says Michael Bruemmer, vice president of Experian Data Breach Resolution.

“In 2014, we serviced just over 3,100 data breach incidents,” he says. “About 80 percent of the root causes that were documented came from employee negligence.” The No. 1 cause, Bruemmer says, was compromised administrative credentials that allowed for easy access through organizations’ cybersecurity defenses.

End users will continue to remain a weak link due to social engineering, phishing and credential compromise, says Julie Conroy, a security analyst at Aite Group. As a result, organizations need to invest in user training and also “run simulated phishing attacks to help users learn via real-world scenarios,” she says.
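To make the simulated-phishing recommendation concrete, here is a minimal sketch, in Python, of how an in-house tool might rotate a random subset of staff through each campaign and record who clicked. The roster, function names and results store are illustrative placeholders, not any particular vendor's product.

```python
import random

# Hypothetical roster; in practice this would come from the HR system or directory.
employees = ["alice@example.com", "bob@example.com", "carol@example.com", "dave@example.com"]

def pick_campaign_targets(roster, sample_fraction=0.25, seed=None):
    """Select a random subset of users for this cycle's simulated phishing run."""
    rng = random.Random(seed)
    sample_size = max(1, int(len(roster) * sample_fraction))
    return rng.sample(roster, sample_size)

def record_result(results, user, clicked):
    """Track who clicked the simulated lure so follow-up training can be targeted."""
    results[user] = clicked
    return results

targets = pick_campaign_targets(employees, sample_fraction=0.5, seed=42)
results = {}
for user in targets:
    # A real tool would send the lure here; this sketch only records the target.
    record_result(results, user, clicked=False)

print("This cycle's targets:", targets)
```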

Every individual with access to information assets should receive training as soon as possible once starting their job, privacy and security expert Rebecca Herold says. In addition, they should receive security reminders and tips on an ongoing basis. “Otherwise their bad security habits will result in breaches,” she says.

3. Monitor Third Parties

Third parties are often the source of a data breach, as major incidents last year demonstrated.

Home Depot’s breach, which affected 56 million credit and debit card numbers, resulted from the compromise of a third-party vendor, a fact that is “eerily” similar to the circumstances of the Target breach, experts say. These incidents point to the need for ongoing third-party oversight, Herold says.

“Every organization contracting any type of entity to do work for them that has access to their information assets needs to ensure the entity has effective information security controls in place, and provide appropriate levels of ongoing oversight to the contracted entity to ensure they maintain appropriate levels of security controls,” Herold says.

4. Establish Procedures for Security Updates

All organizations need to have documented procedures for implementing new and updated systems, applications and endpoints – and ensure that those procedures are carried out, Herold says, a lesson gleaned from the JPMorgan Chase breach.

In that incident, the breach likely started with a server the bank’s security team overlooked when upgrading to two-factor authentication controls (see: Chase Attackers Exploited Basic Flaws).

“The Chase breach clearly shows how important basic checks were overlooked during such upgrades,” Herold says. “Were there any documented upgrade procedures in place? If yes, were they being followed? Was there a person or position given responsibility for overseeing them?”

5. Schedule Frequent Penetration Tests

Many large enterprises have sprawling IT infrastructures that have been cobbled together over time, which makes them difficult to secure, says Aite Group’s Conroy. So it’s important to hire penetration testers to review systems and their various endpoints for weaknesses.

“If you are not investing in white hat hackers to pen test your systems, this needs to be added to your plan, because if you are not proactively finding the weaknesses in your system, the bad guys will,” she says.

And having systems penetration-tested once a year is not enough, Conroy warns. “We’ve seen time and time again that new development cycles lead to new gaps in security.”

ERP in Retail Industry

Abstract

In today’s world, information systems have become an integral part of any organization’s long-term vision. They are no longer viewed merely as enablers; they have become part of strategic decisions and play a crucial role in an organization’s success or failure.

In the past two decades, the retail industry across the globe has seen major IT transformation programs, and most of these programs involved migration from legacy applications to high-end ERP software.

ERP in Retail Industry

The term ‘ERP‘ refers to the business software that has been designed to record and manage enterprise data for any organization. An ERP System automates and integrates core business processes and typically uses a central database that holds all the data relating to the various system modules.

ERP systems comprise different modules such as order entry, purchasing, sales, finance, inventory management, production planning and human resources. The components are designed to work seamlessly with the rest of the system and provide a consistent user interface throughout.

ERP software packages have an enterprise-wide reach that offers cross-functional capabilities to the organization. The different functional departments involved in operations or inventory processes are integrated into a single system. An ERP package takes care of business processes such as order entry, logistics and warehousing, and also caters to business functions such as accounting, marketing, strategic management and human resource management.
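To illustrate the idea of a single central database shared by all modules, here is a minimal sketch (SQLite is used purely for illustration) in which an order-entry function and an inventory-report function operate on the same tables, so a sale recorded by one is immediately visible to the other. The table layout and module names are invented for the example, not taken from any ERP product.

```python
import sqlite3

# One shared database stands in for the ERP's central data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (sku TEXT PRIMARY KEY, on_hand INTEGER)")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY AUTOINCREMENT, sku TEXT, qty INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('SKU-001', 100)")

def order_entry(sku, qty):
    """Order-entry module: record the sale and decrement stock in the shared database."""
    conn.execute("INSERT INTO orders (sku, qty) VALUES (?, ?)", (sku, qty))
    conn.execute("UPDATE inventory SET on_hand = on_hand - ? WHERE sku = ?", (qty, sku))
    conn.commit()

def inventory_report(sku):
    """Inventory module: read the same table the order-entry module just updated."""
    row = conn.execute("SELECT on_hand FROM inventory WHERE sku = ?", (sku,)).fetchone()
    return row[0]

order_entry("SKU-001", 5)
print(inventory_report("SKU-001"))  # 95: both modules see one consistent record
```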

ERP Modules

Retail Industry challenges

1. Changing customer behavior – Across the globe, the change in customer behavior has become evident and visible. Today’s customers ask for personalization when they shop, and this demand is not limited to prices but extends to personalized products, offers and services. With this kind of demand from customers, retailers need to be more dynamic in their analysis of data and more efficient and flexible in their offerings.

2. Competition – To remain competitive, retailers need to understand consumer demand at each point of interaction and respond to the various inputs in real time across the enterprise. This needs a proactive approach on the part of the retail organization to sense the specific requirements of the consumer before other competitors and respond to them in real-time ensuring customer satisfaction in the process.

Moreover, margins in the retail business generally are very low and that removes any scope for waste or inefficiencies in the business processes. Efficiency is critical to survive in the retail industry. A proactive approach on the part of the retail organization requires an enterprise wide monitoring and control of the various business processes that may ultimately lead to the desired efficiencies and long-term customer satisfaction and profitability.

3. Socioeconomic environment and adherence to regulations – In order to meet regulatory standards, retailers require enterprise-wide process visibility, data access and near-instant performance reporting. However, the needed flexibility, process efficiency, reliable information and responsiveness are very hard to achieve given the existing portfolio of legacy, homegrown and packaged software applications used by a majority of retail organizations.

Therefore, a retail business would benefit immensely from an integrated information system infrastructure that continuously monitors and dispatches the necessary information on the flow of goods all the way from the supplier to the cash register, and then on to accounting as well as other functions of the retail organization.

A combination of flexibility, process efficiency, reliable information and responsiveness is critical to a retail business, and ERP packages have been introduced to eliminate IT complexity, albeit with some implementation challenges for line-of-business and IT management staff.

Integration of the various business functions is an essential prerequisite for synchronization among the different business activities involved in a retail business. A number of large retail chains around the world have already invested in packaged software suites to integrate their core business activities. However, a lot of retailers are still using fragmented legacy software applications to manage their core business functions which results in somewhat lower levels of effectiveness and efficiency.

Moreover, a majority of Chief Information Officers [CIOs] in the retail sector believe that it is cumbersome to rip and replace the existing information systems handling the routine management of their retail operations.

Advantages of Retail ERP Suite

Major advantages:

1. Retail-specific components — Unlike a general ERP package, a retail ERP suite offers retail-centric components that are customized to meet the specific requirements of a retail organization in an effective and efficient manner. This makes a retail ERP suite much better suited to a retail organization’s needs.

2. Sub-segment expansion option — Within the retail sector, there is a wide variety of segments that differ in the nature and scope of their operations. A retail ERP package has provisions to meet the varying needs of these different segments.

3. Support for the store system — A retail ERP suite offers support for the store systems that form the pivot of a retail business. The critical functions include keeping track of inventory, ordering and replenishment, loss prevention and task management. This makes a retail ERP system suited to the specific needs of a retail organization.

Minor Advantages:

1. Configuration and scalability — A good retail ERP system allows a high degree of customization and is easily scalable to attune itself to the size of the organization and its level and scope of operations. Such configuration and scalability prove to be a boon in managing the retail operations across an enterprise. This allows a retail ERP system to grow with the organization.

2. Phased implementation support — Modern retail ERP systems provide support for phased implementation. This feature allows the software package to be implemented in a step-by-step incremental manner rather than in one go. This makes the transition to an ERP package a lot easier. This feature allows the users to acclimatize themselves to an ERP package that may initially seem complicated to use.

3. Support for advanced functionality — Modern retail ERP systems provide support for advanced functionalities that are helpful in the decision-making process, such as formulating pricing strategies, merchandise planning, inventory optimization and store execution. The advanced functionalities help users formulate business strategies to introduce efficiencies in critical business processes. Top management uses this feature to set benchmarks and achieve the desired results.

4. Workflow automation and enterprise process management — Modern-day retail ERP packages offer workflow automation and enterprise process management to make the workflow smooth and seamless across the entire enterprise. This allows the management to monitor and keep track of the workflow while also undertaking enterprise process management, leading to the identification and removal of any inconsistencies in the business process.

5. Technology and application integration — A good retail ERP system allows technology and application integration to enable a platform-independent, seamless transfer of processes across different modules running on different technologies in an enterprise-wide environment that may include interaction with legacy systems and external entities such as suppliers and customers. Such integration provides the critical enterprise-wide view to the management.

Disadvantages of Retail ERP suite

The success of a retail ERP suite depends on the IT skills and the experience of the workforce, including training on the utilization of the information system in an effective and efficient manner.

Many companies cut costs by reducing the training budgets for the retail ERP suite. Privately owned small enterprises are often short of funds, and this leads to a situation where personnel often operate their ERP system with inadequate education in utilizing the package to its full potential. The common disadvantages of using a retail ERP package are a result of this lack of training. Other major disadvantages of using an ERP package include:

1. High installation costs – ERP systems are quite expensive to install and maintain.

2. Situation misfit — An ERP package may prove to be a misfit in a particular situation. Companies end up re-engineering their business processes to fit the “industry standard” prescribed by the ERP system, and this frequently leads to a loss of competitive advantage. Ideally, an ERP package should suit the requirements of a company and not the other way around.

3. Limited scope for customization – The ERP software packages allow only a limited scope for customization. Some customization in the ERP package may involve making changes to the ERP software structure that are not allowed under the license agreement. This can make the situation of the ERP package user very difficult indeed.

4. Complex usage – ERP systems can be complicated to use. In order to utilize an ERP package to its full potential, the users are required to undergo considerable training which obviously costs time and money.

5. High switching costs – Once a system is established, switching costs are quite high for any of the partners involved. This leads to a reduction in flexibility and strategic control at the corporate level. The high switching costs can be attributed to the fact that installing an ERP package involves a considerable investment of both time and money.

6. Need for total transparency – Resistance in sharing sensitive internal information between departments can reduce the effectiveness of the ERP package. An ERP package is designed in such a way that seamless information interchange between the different departments is an essential prerequisite to achieve its full benefits.

7. Compatibility issues – There are frequent compatibility problems with the various legacy systems of the business partners. A company may have installed the latest ERP package but it has to be compatible with the legacy systems used by its associates or business partners.

8. Overkill – An ERP system may be over-engineered relative to the actual needs of the customer. Such a situation may be called overkill since an organization may not require the functions or capabilities extended by an ERP system.

SWOT Analysis of Retail ERP

Strengths

a. Provides an enterprise wide view of the workflow

b. Allows integration with systems of associates and business partners

c. Helps in routine decision-making

d. Allows streamlining of business processes

Weaknesses

a. Expensive to procure

b. Requires significant employee training

c. Compatibility Issues with other/legacy systems

Opportunities

a. Buoyant retail sector in the emerging global markets

b. The retail sector is overlooked by the major ERP solution providers

c. High efficiencies becoming critical in the retail sector due to cut-throat competition and paper-thin margins

Threats

a. Increasing complexity of such systems

b. Divided opinion over the Return-On-Investment [ROI] from such tools

Main Components of Retail ERP System

The main components of a retail ERP system include the following:

1. Merchandise management — This constitutes the primary component of a retail ERP system, supporting the merchandise management operations undertaken by retailers. It includes activities such as setting up, maintaining and managing the retail outlet, and keeping track of item prices, inventory and the different vendors. This component of the Enterprise Resource Planning [ERP] system also offers key reporting functions as well as the allied business intelligence modules.

The merchandise management component also offers an integrated interface to the other retail applications, thereby acting as a bridge between the different applications supported by the retail ERP suite and facilitating more efficient retail operations. A typical retail chain offers hundreds of thousands of different products to its customers.

In a nutshell, the merchandise management component of an Enterprise Resource Planning [ERP] package covers all the activities centered on the merchandise offered for sale at the retail store.

2. Retail planning — This element of the ERP system allows retailers to undertake planning activities on both a large and a small scale as the situation requires. It focuses on the different strategies to be employed to help the retail store increase merchandise sales. The retail-planning component focuses on achieving economies of scale and attaining the desired efficiencies by increasing merchandise sales at the retail chain.

This component helps retailers plan the various sales and promotional events aimed at boosting sales of the merchandise offered at the store. In this way, retail planning forms a critical component of retail ERP systems, supporting planning activities at both the micro and the macro level to push merchandise sales at the retail store.

The retail-planning component is extensively used by the middle and upper management in formulating favorable promotional strategies to stimulate sales and ensure an increase in inventory turns at the retail store. Hence, retail planning may be called a critical component of retail ERP systems.

3. Supply chain planning and execution — This component supports both the internal and the external supply chain process, covering the planning as well as the execution side of supply chain management in retail. The supply chain forms the backbone of retail operations, representing the flow of information, finances and materials as they move from the supplier to the wholesaler to the retailer and finally to the end consumer of the merchandise.

Supply chain planning and execution is an integral part of the retail ERP system. Retailers aim to take advantage of operational synergies, and to meet that requirement the supply chain planning and execution component allows them to keep track of the entire supply chain, beginning at the manufacturer and ending at the consumer.

It allows a retailer to keep track of all the activities and processes comprising the supply chain of the merchandise offered at the retail store. This helps retailers run their businesses effectively and efficiently by closely monitoring their supply chains and managing them smoothly to ensure profitability.

4. Store operations — This element of the ERP system takes care of all the operations related to the store management function. Store operations are central to a retail chain since retailers keep the majority of their inventory at the stores. Moreover, the store operations component is unique to retail ERP systems, as other ERP packages do not offer as comprehensive a store operations component as a retail ERP system does.

The store operations component includes store-specific inventory management, sales audit, returns management, perishables management and labor management. It can also include customer management and the associated promotion execution systems.

5. Corporate administration – This component aims to serve the information needs of the administration and usually includes the process management and compliance reports required by top management for decision-making purposes. It also includes other corporate financial reports such as accounts receivable, accounts payable, general ledger and asset management reports.

The corporate administration component may also include corporate-level Human Resource Management [HRM] systems. It thus plays a critical role in giving top management a general picture of the health of the retail business through the various financial reports it generates within the retail ERP system.

The corporate administration component can be termed the eyes and ears of top management in the retail business. It makes available the data required to provide insight into the financial health of the business. Moreover, this component of the retail ERP system is used to generate specific compliance reports submitted to an industry watchdog or other monitoring agency that requires such data on a periodic basis. These reports not only help management meet mandatory disclosure norms but also support the formulation of effective management strategies.

Major Retail ERP Vendors and their Products

The global Enterprise Resource Planning [ERP] market is dominated by relatively few niche players who command the lion’s share of the market. The retail ERP systems segment shows the same trend in terms of the relative market share of the major global vendors.

Retail ERP Vendor – Product

1. Aldata – Aldata G.O.L.D.
2. GERS – GERS Merchandising
3. Island Pacific – Island Pacific Merchandising System (IPMS)
4. JDA Software – Portfolio Merchandise Management (PMM)
5. Jesta I.S. – Vision Merchandise Suite
6. Microsoft Dynamics – Microsoft Dynamics NAV “Navision”
7. NSB Group – Connected Retailer Merchandising
8. Oracle – Oracle Retail Merchandising System (ORMS)
9. Retalix – Retalix HQ
10. The Sage Group – Sage Pro ERP
11. SAP – SAP for Retail
12. Tomax – Tomax Merchandise Management

Business Implications

The business implications of a retail ERP system are immense. In the contemporary business environment, where liberalization, privatization and globalization are the order of the day, most retail businesses around the world operate under fiercely competitive market conditions.

Such competition has led to paper-thin margins in this sector. To remain competitive, retail organizations surviving on thin margins cannot afford the luxury of systemic inefficiencies or delayed decision-making. Both activities, whether increasing efficiency in business processes or taking prudent decisions quickly, require an inside-out awareness of the business process. A retailer ought to know the ‘complete picture’ that indicates the true state of the retail business.

A retail organization may comprise a small chain of stores confined to a single town or city, or it may be a mammoth organization with thousands of stores scattered across different parts of the world. Walmart is one example of a retail business with operations spanning several continents. Managing such a distributed network of retail chains is a Herculean task, and retail Enterprise Resource Planning [ERP] packages help retailers better manage their enterprise-wide operations across the globe.

Retail ERP systems provide a one-stop solution for most retail information processing challenges by offering a comprehensive way to manage a complex retail business. An Enterprise Resource Planning [ERP] system helps retailers manage their businesses effectively and efficiently by providing an integrated and consistent information flow, making it much easier to keep track of all transactions. A retail ERP system allows automatic recording of transactions in a real-time environment. For large retail organizations, these systems have become indispensable tools for surviving and increasing profitability in the retail sector.

10 Cloud Security Best Practices

As more companies make the jump to the cloud, the importance of building strong cloud security operations grows. Just as with any industry, implementing the technologies with industry best practices will help yield the best result in the long term. CRN asked security experts what they see as the most important best practices in the growing market for cloud security offerings. Take a look at what they had to say.

1. Deploy Identity And Access Management

No matter what cloud security measures are in place, without identity and access management solutions a company has a huge hole in its security portfolio, experts agreed. Chenxi Wang, vice president of cloud security and strategy at CipherCloud, said these solutions have to be integrated with the organization and continuously updated as employee turnover occurs. That integration is often missed, she said, opening the door to insider threats.

Bill Lucchini, senior vice president and general manager of Sophos Cloud, agreed.

“You want to look at employee controls for sure,” Lucchini said. “A lot of these breaches happen because of one disgruntled employee or one not-careful employee.”
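A minimal sketch of the reconciliation both experts are describing: compare the current HR roster against the accounts still enabled in a cloud application, so departed employees are deprovisioned promptly and provisioning gaps surface. The data is hypothetical; in practice both lists would come from the directory, the HR system and the cloud provider's admin interface.

```python
# Hypothetical inputs: the active HR roster and the accounts enabled in a cloud app.
hr_roster = {"alice", "bob", "carol"}
cloud_app_accounts = {"alice", "bob", "carol", "dave"}  # "dave" left last month

def accounts_to_deprovision(roster, accounts):
    """Accounts with no matching active employee should be disabled promptly."""
    return accounts - roster

def accounts_missing(roster, accounts):
    """Active employees with no account may indicate a broken provisioning workflow."""
    return roster - accounts

print("Disable:", accounts_to_deprovision(hr_roster, cloud_app_accounts))  # {'dave'}
print("Provision:", accounts_missing(hr_roster, cloud_app_accounts))       # set()
```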

2. Classify Data

Solution providers need to help their customers sort out data classification levels and, from there, determine what level of data protection they will employ in the cloud, CipherCloud’s Wang said. Companies are just beginning to recognize the importance of this practice, Wang said, and are starting to make strategic decisions about their data protection policies.

“The cloud doesn’t know. The cloud operations won’t know your business processes, your priorities, so it doesn’t know what’s important and what’s not,” Wang said. “You have to be the one who specifies the data at criticality levels.”
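A minimal sketch of what specifying criticality levels might look like in practice: the business assigns a classification to each record, and that classification, not the cloud provider, drives the protection applied before upload. The labels, rules and policy table below are illustrative assumptions, not a standard.

```python
# Illustrative policy: which protection each data class gets before it goes to the cloud.
POLICY = {
    "public":       "send as-is",
    "internal":     "send over TLS",
    "confidential": "tokenize or encrypt before upload",
    "restricted":   "keep on-premises",
}

def classify(record):
    """Toy classifier: the business, not the cloud, decides how critical a record is."""
    if "ssn" in record or "card_number" in record:
        return "restricted"
    if "salary" in record:
        return "confidential"
    return "internal"

record = {"name": "Jane Doe", "salary": 90000}
level = classify(record)
print(level, "->", POLICY[level])  # confidential -> tokenize or encrypt before upload
```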

3. Create Visibility For Policy Control 

To have strong policy control in the cloud, companies need to make sure they have complete visibility, CipherCloud’s Wang said. As a best practice, that includes knowing what assets are in the cloud, what data is being sent to which application and who is using what data. Having that visibility helps prevent “blind spots” with threat detection as well as preventing insider threats, she said, adding that there are plenty of tools on the market to help a company gain that visibility into its cloud operations.

4. Provide Regular Auditing

While everyone tries to do their best with security, it’s hard not to miss the forest for the trees, Sophos’ Lucchini said. For that reason, Lucchini recommended solution providers conduct regular cloud security audits as a best practice to find security holes for their customers.

“It’s really enlightening,” Lucchini said. “It’s worth doing periodically … just getting an outside expert is a big help.”

5. Shared Responsibility Model

Many security experts spoke of the shared responsibility model of the cloud, where clients and cloud providers have responsibilities for different aspects of security. As Trend Micro technical adviser Dave Abramowitz sees it, cloud providers are responsible for the security of the hardware and physical infrastructure, while customers are responsible for securing the OS and applications. Solution providers have to educate customers on where the cloud provider’s security responsibilities end and plug any security holes left uncovered. For solution providers, making sure clients are on board and understand that shared responsibility model is a best practice, he said.

6. Advocate For Stronger Password Protection 

Having strong passwords is an important best practice in general, but especially important in the cloud, Trustwave Vice President of Managed Security Testing Charles Henderson said. On a basic level, that means training employees to choose strong passwords, with more than 10 characters, multiple words and symbols, he said. On top of that, Henderson recommended clients implement two-factor authentication solutions to make it more difficult for attackers to gain control of an account.
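Here is a small sketch of the basic checks Henderson describes (more than 10 characters, multiple words, symbols). The function and thresholds are illustrative only, and such a check complements rather than replaces two-factor authentication.

```python
import re

def meets_basic_policy(password):
    """Check the basics described above: length over 10, multiple words, and a symbol.
    This illustrates the rules; it is not a full strength meter."""
    long_enough = len(password) > 10
    multiple_words = len(re.findall(r"[A-Za-z]{2,}", password)) >= 2
    has_symbol = bool(re.search(r"[^A-Za-z0-9]", password))
    return long_enough and multiple_words and has_symbol

print(meets_basic_policy("blue!horse7battery"))  # True
print(meets_basic_policy("password1"))           # False
```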

7. Choose A Cloud Vendor With A Solid Track Record

While it might seem simple, an important best practice of cloud security is choosing a vendor with a solid track record for security, said Sam Heard, president of Data Integrity Services, a solution provider based in Lakeland, Fla. So far, few of the mega data breaches have targeted cloud providers, but it is still vitally important to fully evaluate what they bring to the table and how those offerings have performed in the past for other clients, he said.

8. Secondary Internet Pipe

For clients with mission-critical applications in the cloud, Data Integrity Services’ Heard said he recommends deploying dual Internet pipes as a best practice, one for running applications in the cloud and the other for regular Internet traffic. While much more expensive, Heard said utilizing two different Internet connections cuts any slowdown in accessing applications in the cloud and helps prevent critical application downtime in the case of a WAN issue.

9. Perform Penetration Tests On Ecosystem Partners

One emerging best practice is doing penetration tests on any partners clients work with, including contractors, manufacturers and anyone who the company communicates with and does business with on a regular basis, said Trend Micro’s Abramowitz. A company can do everything to protect itself, he said, but if its ecosystem partners aren’t doing the same there is an opening for an attack. While not every company does this today, Abramowitz said he sees it happening more and more in the market.

10. Don’t Forget To Monitor Threat Detection Technology

While having threat detection technologies in place is an obvious security best practice, Trustwave’s Henderson said companies need to make sure they have enough resources and expertise in monitoring traffic to back up the technologies. The faster a breach is detected, the faster it can be contained. However, Henderson said most organizations today are falling short in implementing threat detection best practices, citing data from the 2015 Trustwave Global Security report, which found an average 14.5 days from intrusion to containment when detected internally and an average 154 days when discovered by an external party.

Endpoint protection: How to select virtualization security tools

Most virtualization security tools still follow dedicated agent models, but some technologies are starting to offload resources to a dedicated VM and leverage hypervisor APIs.

As more network infrastructure becomes virtualized in both private and public cloud environments, how is traditional endpoint security technology evolving to adapt?

Several virtualization security tools exist for host-based security monitoring and protection in virtual environments. The first endpoint security approach is fairly traditional; security teams often use either a standalone antivirus or host-based intrusion detection system/intrusion prevention system (HIDS/HIPS) agent in the virtual machine (VM), or antivirus and host-based monitoring that’s been adapted for virtual infrastructure with hypervisor APIs.

Specialized tools are not as common for virtual environments. But some endpoint security technologies, such as Bit9 + Carbon Black, Mandiant (FireEye) and Guidance Software’s EnCase platform, offer whitelisting and file integrity monitoring agents or endpoint forensics agents. Other endpoint protection tools, such as Bromium and Invincea, leverage virtualization capabilities, although this type of software is often found on traditional endpoints.

In virtual environments, where pooled resources are the norm, any virtualization security tools that drain system resources on a per-VM basis should be regarded as a potential risk to the whole virtualization ecosystem. In fact, much of the antivirus industry still has to adapt to accommodate VMs and the performance ramifications of virtualization’s shared resource compute model. Examples of virtualization-friendly antivirus include Kaspersky Security for Virtualization, Bitdefender Security for Virtualized Environments, and Symantec Endpoint Protection. These antivirus tools have been optimized for performance and scheduling, offering more lightweight deployment options than usually found on traditional endpoints.

New architectures are emerging that tie an HIDS/HIPS VM to the hypervisor kernel, passing all traffic and activity through the VM for “cleaning.” VMware’s vShield Endpoint, a commercial product, is primarily an integrated interface and architecture that allows antimalware products like Trend Micro’s Deep Security, Sophos Antivirus for vShield, and Intel Security’s MOVE Antivirus (McAfee Management for Optimized Virtual Environment) to operate efficiently within the hypervisor. The architecture is very innovative — a single VM is designated as the “antivirus/HIDS VM,” and a low-level bus in the hypervisor kernel sends all traffic and data to be evaluated within that VM only. This saves a significant amount of overhead, because none of the production VMs require a heavy agent.
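The offload idea can be sketched conceptually: each production VM keeps only a thin stub that forwards scan requests to a single dedicated security VM holding the signature set, so no guest carries a heavy agent. This is an assumption-laden illustration of the architecture, not VMware's vShield interfaces or any vendor's actual agent.

```python
import hashlib

# Conceptual model only: one "security VM" holds the signatures and does the scanning;
# guest VMs forward lightweight requests instead of running a full engine locally.
class SecurityVM:
    def __init__(self, known_bad_hashes):
        self.known_bad = set(known_bad_hashes)

    def scan(self, file_bytes):
        digest = hashlib.sha256(file_bytes).hexdigest()
        return "malicious" if digest in self.known_bad else "clean"

class GuestVM:
    """A thin stub inside each production VM: no local engine, no local signatures."""
    def __init__(self, name, security_vm):
        self.name = name
        self.security_vm = security_vm

    def open_file(self, file_bytes):
        verdict = self.security_vm.scan(file_bytes)  # offloaded to the dedicated VM
        return f"{self.name}: {verdict}"

bad_sample = b"fake-malware-sample"
sec_vm = SecurityVM({hashlib.sha256(bad_sample).hexdigest()})
guests = [GuestVM(f"vm{i}", sec_vm) for i in range(3)]
print([g.open_file(bad_sample) for g in guests])
print(guests[0].open_file(b"ordinary document"))
```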

Key criteria for evaluation

The most important criteria for teams evaluating host-based security for VMs are compatibility, performance specifications, and scalability for agents and tools in the virtual environment. Security and operations teams should also evaluate how the tools will be integrated (or if they can be). Be sure to investigate whether the host-based security tools are compatible with virtualization management consoles like vCenter (VMware), System Center Virtual Machine Manager (SCVMM for Hyper-V) or XenCenter for XenServer. This type of integration is not that common. Most HIDS/HIPS and antimalware agents have a separate console already, but any integration capabilities should be thoroughly evaluated, especially if there’s an operational need for consolidated management. Simple architectural considerations also apply. For example, will putting the HIDS/HIPS management console in a VM on the same hypervisor platform be a better use of resources? This may be the case in a cloud environment, especially if it’s hosted elsewhere.

As an alternative, tools like Bromium and Invincea are two host-based security technologies that use virtualization to defeat attacks. Bromium, founded by Xen creator Simon Crosby, is a hybrid Type-I hypervisor that uses Intel VT-x hardware virtualization to create a thin hypervisor layer under the actual OS (Windows, for example). Any malware or attacks on the system are intercepted by a local policy engine that can use the hardware layer for enforcement, almost emulating the idea of security researcher Joanna Rutkowska’s Blue Pill rootkit in some ways. Invincea, on the other hand, leverages application virtualization within the OS with a policy “wrapper” around certain high-risk applications like browsers and email clients.

What’s ahead for virtualization in endpoint security?

The market for virtualization endpoint security is evolving rapidly. Most tools follow the traditional model, using a dedicated agent. Some technologies are starting to offload resources to a dedicated VM and leverage hypervisor APIs to manage detection and prevention tasks. Still more endpoint security software makes use of the virtualization capabilities themselves, preventing attacks from successfully interacting with the hardware, memory or OS.

Deep Dive into VMware Fault Tolerance

Server virtualization has become very popular and has grown very fast in the last few years, and enterprises have started to use it more and more to gain benefits such as:

1: Higher server consolidation ratios.

2: Better resource utilization (Using DRS).

3: Lower power consumption (Leveraging DPM).

4: Increased workload mobility via technologies such as vMotion and svMotion.

Features such as Distributed Resource Scheduler (DRS) and Distributed Power Management (DPM) give organizations the flexibility to go for an even higher consolidation ratio than ever before. DRS is now a very trusted feature, and almost all organizations are happy to use it in fully automated mode, which was not the case when DRS was first introduced by VMware.

DRS and DPM complement the hardware evolution trends by applying dynamic resource allocation to lower the capital and operating costs in a datacenter.

However, increased consolidation ratios also bring some risks. As more business-critical workloads are deployed in virtual machines, a catastrophic failure of a single physical server might interrupt a large number of services.

VMware understood this and addressed the availability issues for mission-critical workloads by introducing features such as VMware HA, Site Recovery Manager (SRM) and VMware Data Protection (VDP) over time.

These solutions work by disassociating the virtual machine state, including all business logic, from the underlying hardware and applying data protection, disaster recovery and high availability services to virtual machines in a hardware-independent fashion.

For virtual machines that can tolerate brief interruptions of service and data loss for in-progress transactions, existing solutions such as VMware HA supply adequate protection. However, for the most business-critical and mission-critical workloads even a brief interruption of service or loss of state is unacceptable.

So, for workloads that can’t tolerate service discontinuation even for a single second, VMware introduced a feature called Fault Tolerance (FT). Before diving into FT, let’s see how this level of availability was achieved before virtualization existed.

Fault Tolerance in the Physical World

All fault tolerance solutions rely on redundancy. For example, many early fault tolerant systems were based on redundant hardware, hardware failure detection, and failing over from compromised to properly operating hardware components.

In the past, high availability was achieved in two ways:

a) Using fault tolerant servers based on proprietary hardware.

b) Using software clustering.

Fault Tolerant Servers

Fault tolerant servers generally rely on proprietary hardware. These servers provide CPU and component redundancy within a single enclosure, but they cannot protect against larger-scale outages such as campus wide power failures, campus wide connectivity issues, and loss of network or storage connectivity.

In addition, although failover is seamless, re-establishing fault tolerance after an incident might be a lengthy process potentially involving on-site vendor visits and purchasing custom replacement components. For physical systems, fault tolerant servers provide the highest SLAs at the highest cost.

Software Clustering

Software clustering generally requires a standby server with a configuration identical to that of the active server. The standby must have a second copy of all system and application software, potentially doubling licensing costs. A failure causes a short interruption of service that disrupts ongoing transactions while control is transferred to the standby. Application software must be made aware of clustering to limit the interruption of service. However, the potential for data loss or corruption during a crash is not fully eliminated.

An example of such a system is an application built around Microsoft Cluster Service (MSCS).

VMware Fault Tolerance

VMware FT addresses the above issues by leveraging the encapsulation properties of virtualization and building high availability directly into the x86 hypervisor, delivering hardware-style fault tolerance to virtual machines.

It requires neither custom hardware nor custom software. Guest operating systems and applications do not require modifications or reconfiguration. In fact, they remain unaware of the protection transparently delivered by the ESXi hypervisor at the x86 architecture level.

FT relies on VMware vLockstep technology. When FT is enabled on a VM, a secondary copy of the VM is spawned immediately. The secondary VM runs in virtual lockstep with the primary virtual machine. The secondary VM resides on a different host and executes exactly the same sequence of virtual (guest) instructions as the primary virtual machine. The secondary observes the same inputs as the primary and is ready to take over at any time without any data loss or interruption of service should the primary fail.

FT delivers continuous availability in the presence of even the most severe failures such as unexpected host shutdowns and loss of network or power in the entire rack of servers. It preserves ongoing transactions without any state loss by providing architectural guarantees for CPU, memory, and I/O activity. The two key technologies employed by FT are vLockstep and Transparent Failover.

vLockstep Technology

vLockstep technology ensures that the primary and secondary VMs are identical at any point in the execution of the instructions running in the virtual machine. vLockstep accomplishes this by having the primary and the secondary execute identical sequences of x86 instructions. The primary captures all non-determinism from within the processor as well as from virtual I/O devices.

Examples of non-determinism include events received from virtual network interface cards, network packets destined for the primary virtual machine, user inputs, and timer events.

The captured non-determinism is sent across a logging network to the secondary. The secondary virtual machine uses the logs received over the logging network to replay the non-determinism in a manner identical to the actions of the primary. The secondary thus executes the same series of instructions as the primary.
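A highly simplified sketch of the record/replay idea behind vLockstep: the primary logs every non-deterministic input it consumes, and the secondary replays that log so both copies finish in an identical state. This is a conceptual illustration only, not VMware's implementation.

```python
import random

def run_primary(steps):
    """The primary consumes non-deterministic inputs and logs each one for the secondary."""
    log, state = [], 0
    for _ in range(steps):
        event = random.randint(1, 100)  # stands in for a timer tick or incoming packet
        log.append(event)               # shipped to the secondary over the logging network
        state += event                  # deterministic work driven by that input
    return state, log

def run_secondary(log):
    """The secondary replays the primary's inputs in the same order, reaching the same state."""
    state = 0
    for event in log:
        state += event
    return state

primary_state, log = run_primary(steps=5)
secondary_state = run_secondary(log)
assert primary_state == secondary_state  # both VMs end up identical
print(primary_state, secondary_state)
```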

[Figure: VMware vLockstep]

In my early days as a VMware admin, when I first learned about VMware FT, one question was always on my mind: if both the primary and secondary VM have exactly the same configuration, including the networking stack, why don’t we get an IP address conflict on the network? I asked this twice in interviews and never got an answer. Then, while reading the vSphere Design book, I came across the reason behind it, which is explained below.

Both the primary and secondary virtual machines execute the same instruction sequence and both initiate I/O operations. The difference lies in the way output is treated.

The output of the primary always takes effect: disk writes are committed to disk and network packets are transmitted, for example. All output of the secondary is suppressed by the hypervisor. The external world cannot detect the existence of the secondary and, at all times, treats a fault-tolerant virtual machine as a single unit executing the workload.

Transparent Failover

Because of the way vLockstep works, the existence of the primary and secondary VM is hidden from the outside world, which observes only a single virtual machine image executing a workload. VMware Fault Tolerance must be able to detect hardware failures rapidly when they occur on the physical machine running either the primary or the secondary VM and respond appropriately.

The hypervisors on the two physical machines where the primary and secondary are running establish a system of heartbeat signals and mutual monitoring when FT is enabled. From that point on, a failure of either physical machine is noticed by the other in a timely fashion. Should a failure happen on either physical machine, the other can take over and continue running the protected virtual machine seamlessly via transparent failover.

Transparent failover can be explained using following example:

[Figure: VMware transparent failover]

Let’s suppose the physical machine running the primary VM has failed, as shown in the figure above. The hypervisor on the secondary physical machine immediately notices the failure and disengages vLockstep.

The hypervisor running on the secondary physical machine has full information on the pending I/O operations from the failed primary virtual machine, and it commits all pending I/O. The secondary VM then becomes the new primary, as illustrated in step 2 of the figure above.

This terminates all previous dependencies on the failed primary. After going live, the new primary starts accepting network input directly from the physical NICs and starts committing disk writes; the VMkernel lifts the output suppression that applied to it as a secondary. There is zero state loss and no disruption of service, and the failover is automatic.

After the initial failover, a new secondary VM is spawned automatically by VMware HA, as shown in step 3 of the figure. The new primary’s hypervisor establishes vLockstep with the new secondary, thus re-enabling redundancy. From this point onward, the virtual machine is protected once more against future failures.

The entire process is transparent (zero state loss and no disruption of service) and fully automated. FT deals similarly with the failure of the host executing the secondary virtual machine. The primary hypervisor notices the failure of the secondary and disengages vLockstep. The services provided by the primary virtual machine continue uninterrupted. A new secondary is created and again vLockstep is established between the primary and secondary VM.
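The failover sequence described above can be walked through as a small sketch: detect the lost heartbeat, commit the pending I/O, promote the secondary and spawn a replacement. The host names and steps are invented for illustration and are not VMware internals.

```python
def pick_replacement_host(hosts, exclude):
    """Stand-in for VMware HA choosing another host with spare capacity."""
    return next(h for h in hosts if h not in exclude)

def failover(pair, hosts):
    failed, survivor = pair["primary"], pair["secondary"]
    print(f"Heartbeat from {failed} lost; hypervisor on {survivor} disengages vLockstep")
    print("Pending I/O recorded from the failed primary is committed")
    print(f"VM on {survivor} goes live: output suppression lifted, NIC and disk writes active")
    new_secondary_host = pick_replacement_host(hosts, exclude={failed, survivor})
    print(f"New secondary spawned on {new_secondary_host}; vLockstep re-established")
    return {"primary": survivor, "secondary": new_secondary_host}

hosts = ["host-A", "host-B", "host-C"]
ft_pair = {"primary": "host-A", "secondary": "host-B"}
print(failover(ft_pair, hosts))  # {'primary': 'host-B', 'secondary': 'host-C'}
```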

Does FT support failback?

So we have seen that failover with FT is transparent and without any disruption of service. But what about failback? What happens when the server that was running the original primary VM comes back online after recovering from the failure? Will the original primary become the secondary, or will it become the primary again and force the new primary (which was the secondary before the failure) back into the secondary role?

These questions kept me waiting a long time before I got a correct explanation. I discussed them with many of my colleagues, and each one had their own version of the answer.

So the answer to the above question is “No, FT doesn’t support failback.” Even when the failed physical server comes back online, it does not disrupt the current primary-secondary FT pair. The original primary VM that went down due to the host failure never comes back online again; all the memory pointers of the failed primary VM are deleted. There can’t be more than two VMs in an FT pair at any given time.

Does VMware FT protect against OS failures?

This is another question of great interest. We have seen how FT protects mission-critical workloads against host failures, but what about OS failures? Does FT provide any protection against failures that happen inside the guest OS running in the virtual machine?

The answer to this question is also “no”: FT can’t protect against OS failures. Since the primary and secondary VMs are in vLockstep and maintain the same consistent state, a failure such as a BSOD or a corrupt DLL in the primary will also be replicated to the secondary. So if a primary fault-tolerant VM goes down due to a BSOD, the secondary will suffer a BSOD as well.

As of now, there is no way for FT to protect against this. Maybe in the future VMware will make FT intelligent enough to address this kind of failure as well.

VMware does have features like VM Monitoring and App HA to address these kinds of issues, but they involve a bit of downtime, and services are interrupted from the time the failure occurs until recovery completes.

Top 5 User VDI questions

Once you implement virtual desktops, you’re bound to get more than a few service desk requests from frustrated or confused users.

1. Why can’t I connect?

One of the most common VDI-related help desk calls is from users who are having trouble connecting to a virtual desktop session. If properly constructed, VDI deployments tend to be reliable, but there are a number of different reasons users might be unable to establish a connection.

A lot of VDI questions from users who can’t connect are from those who are working off site. Even in this day and age, there are users who don’t know how to connect to a Wi-Fi hotspot. Plus, some hotels have firewall configurations that block access to virtual desktops unless the user pays for a premium Internet access package. More often than not, solving a desktop connection problem involves making the Internet connection or, in the office, ensuring your network is up to speed for VDI.

2. Why can’t I print?

Printing from a virtual desktop is another source of frustration for VDI users. For example, a friend of mine works from home and often calls me because she’s having trouble printing.

When my friend “can’t print,” it’s because the print jobs are actually being sent to the corporate headquarters over a thousand miles away, instead of being directed to the printer three feet away from her. To keep this from happening in your environment, make sure printer redirection is in place.

3. How can I connect from my personal device?

VDI environments are actually ideally suited to bring your own device (BYOD), because virtual desktops can provide the illusion of Windows desktops and applications running on a device that is not natively Windows compatible.

However, your help desk can expect lots of calls from VDI users who want to access their virtual desktops from personal mobile devices. It’s the “Monkey See Monkey Do” factor. In other words, if one person gets his tablet connected to his virtual desktop, that user’s friends will want the same thing. Once they do, IT will be responsible for applying security policies, training users on how to establish connections to the desktop, etc.

Once you start supporting VDI on mobile devices, you can expect calls from those same users getting new devices. Mobile devices have a relatively short lifespan, and they tend to get lost, stolen or damaged. Constant device turnover will lead to a steady stream of help desk calls as you’ll have to reconfigure endpoints and re-provision virtual desktops or applications to the new devices.

4. Where did my stuff go?

Although some VDI implementations create dedicated personal desktops that belong to specific users, most assign a user’s connection to a random virtual desktop within a virtual desktop pool. This virtual desktop is reset to a pristine condition at the end of each session, so that users can always be guaranteed a healthy virtual desktop.

Resetting virtual desktops at the end of each session goes a long way toward preserving the overall integrity of the VDI environment, but it can be a source of confusion for end users. If users change the desktop background or install an application, those changes might be undone when they log out. At the next logon, they may call the help desk wondering where their customizations went.

To preempt this issue, determine whether you want to use persistent or nonpersistent desktops when you implement VDI. If you decide that desktops will be refreshed after logoff, make sure users know that they can’t personalize their virtual desktop.

5. Why isn’t my password working?

Passwords can be another source of confusion.

Organizations often use technologies such as Exchange ActiveSync to enforce device security (including passwords). The nice thing about ActiveSync policies is that they work on a variety of platforms and can be used without joining the device to a domain. Conversely, the virtual desktops themselves are usually domain joined and therefore authenticate into the Active Directory. The result is that the user has two separate passwords and uses two different authentication mechanisms.

The fun begins when the user is required to change a password. If the user is required to change his VDI password, he might assume that the change applies to the password on his home PC or mobile device as well. Likewise, if the device password is changed, then the user might have trouble logging into his virtual desktop because the Active Directory password remains unchanged.

Windows Server 2003 Migration Nightmares

Microsoft partners are reporting a number of different Windows Server 2003 migration hiccups as the cutoff date nears. On July 14, Microsoft will stop delivering security and software updates for Windows Server 2003, creating a time bomb that will trigger a raft of issues ranging from security and reliability to compliance.

While many partners say the migration has gone smoothly, others are stealing a line from John Steinbeck, saying, “The best-laid plans of mice and men often go awry.”

Upgrade Path For Exchange 2003?

Support officially ran out for Exchange 2003 last year but, just like Windows XP, the deadline didn’t spark a mass exodus off Exchange 2003. Now partners that thought their customers’ Exchange 2003 problems would be solved by updating from Server 2003 to Server 2012 R2 are in for a rude awakening.

While there is an upgrade path for Windows Server 2003, there is not a similar path for Exchange 2003. That means anyone considering an in-place upgrade from Server 2003 to Server 2012 R2 will have to have an alternate strategy for Exchange 2003.

Moving from Exchange 2003 to Exchange 2013 (the most recent version) requires a dual-hop migration: you’ll have to migrate to Exchange 2010 first, and then from Exchange 2010 to Exchange 2013.

Killing Zombie Domain Controllers

Systems administrators are reporting a particularly nasty trend related to retiring domain controllers. Domain controllers are servers that authenticate users, store user account information and enforce security policies for a Windows domain and act as gateways to other Windows domain resources.

Admins report that old, circa-2003 domain controllers often leave behind stale metadata, such as orphaned server objects and lingering DNS records, that can wreak havoc in a fresh installation of Active Directory and DNS. The fix? Before the migration, build the complete list of domain controllers that Active Directory thinks exist, match it against the servers you actually still have running, and then clean up the discrepancies.
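As a first pass at that inventory, the sketch below (Python, with placeholder host names) diffs the list of domain controllers Active Directory still advertises, for example as exported from "nltest /dclist:<domain>", against the servers your monitoring system says are actually alive, and then checks whether each leftover entry still answers on the LDAP port. Anything advertised but not in your inventory is a candidate for metadata cleanup.

```python
import socket

# Domain controllers that Active Directory still advertises
# (e.g., exported from 'nltest /dclist:<domain>'); placeholder names.
advertised_dcs = {"dc01.corp.example.com", "dc02.corp.example.com",
                  "olddc2003.corp.example.com"}

# Servers your monitoring/inventory system says are actually running.
running_servers = {"dc01.corp.example.com", "dc02.corp.example.com"}

def answers_ldap(host: str, port: int = 389, timeout: float = 3.0) -> bool:
    """Rough liveness check: does the host accept a TCP connection on LDAP?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for dc in sorted(advertised_dcs - running_servers):
    status = "still answering LDAP!" if answers_ldap(dc) else "unreachable"
    print(f"Advertised but not in inventory: {dc} ({status}) -> review for metadata cleanup")

for dc in sorted(running_servers - advertised_dcs):
    print(f"Running but not advertised in AD: {dc} -> check its DC role and records")
```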

Configure Windows Server 2012 R2 DNS With TLC

dns-query-succes

One surefire way to mess up an otherwise clean Server 2003 migration is to neglect one of the last and most important steps: configuring DNS on Windows Server 2012 R2. Microsoft partner Derrick Wlodarz, founder of Park Ridge, Ill., technology consulting and service company FireLogic, wrote in a recent company blog post: “DNS by and far is one of the most misconfigured, maligned, and misunderstood entities that make up a Windows network.” Couple that with the pressure of a looming migration deadline and it could spell double trouble.

When configuring your DNS, Wlodarz urges:

— Your Windows Server 2012 R2 server should always be the primary DNS server.

— Consider adding the IP of your firewall as a secondary internal DNS server.

— Never hand out public DNS servers in client DHCP scopes or in the IP settings of Windows Server 2012 R2 itself (a quick audit sketch follows this list).

— In nearly all cases, don’t disable IPv6.
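To spot-check the public-resolver point on a client or on the server itself, a small script can flag any well-known public DNS address that a machine is using. The sketch below is a rough Python illustration: it runs the built-in "ipconfig /all" command and looks for a handful of well-known public resolver addresses anywhere in the output, which is a deliberately coarse heuristic; the resolver list is an assumption you would adjust for your environment.

```python
import subprocess

# Well-known public resolvers that internal clients shouldn't be using.
PUBLIC_RESOLVERS = {"8.8.8.8", "8.8.4.4", "1.1.1.1", "1.0.0.1",
                    "9.9.9.9", "208.67.222.222", "208.67.220.220"}

def flag_public_resolvers() -> list[str]:
    """Coarse heuristic: run the built-in 'ipconfig /all' command and report
    any well-known public resolver address that shows up in the output.
    (If one of these appears at all, it is almost certainly a DNS entry.)"""
    output = subprocess.run(["ipconfig", "/all"], capture_output=True,
                            text=True, check=True).stdout
    return sorted(ip for ip in PUBLIC_RESOLVERS if ip in output)

if __name__ == "__main__":
    hits = flag_public_resolvers()
    if hits:
        print("WARNING: public DNS servers configured:", ", ".join(hits))
    else:
        print("No well-known public resolvers found in ipconfig output.")
```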

Desktop And Laptop Client Failures

So you successfully migrated from Server 2003 to Server 2012 R2. Congratulations. But before you pop the cork on your post-migration celebration, you’ll want to check the fleet of PCs and devices connected to your new server.

There are a number of Microsoft client compatibility issues that you’ll want to double-check, according to Microsoft TechNet. For example, Outlook 2003 isn’t supported for use with Office 365; Microsoft recommends using Outlook 2010 or Outlook 2013 for Office 365 connectivity.

Telltale signs that someone is having Microsoft client issues related to out-of-date Office software include lost email messages between Exchange 2003 and Office 365; directory synchronization hiccups; calendar connectivity issues that prevent users from seeing co-workers’ free/busy information; and mailbox migration problems when using the Exchange Migration Wizard.
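One quick way to find stragglers before they generate tickets is to inventory which Office builds are actually installed. The sketch below is a single-machine illustration in Python using the standard winreg module; it walks the usual Uninstall registry keys and prints anything that looks like an Office or Outlook product. The key paths and the "2003" string match are assumptions, and in practice you would push something like this out through your RMM or configuration management tooling.

```python
import winreg

UNINSTALL_PATHS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

def installed_office_products() -> list[str]:
    """Scan the Uninstall keys for anything that looks like Office/Outlook."""
    found = []
    for path in UNINSTALL_PATHS:
        try:
            root = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
        except OSError:
            continue  # the WOW6432Node path won't exist on 32-bit Windows
        with root:
            for i in range(winreg.QueryInfoKey(root)[0]):
                try:
                    with winreg.OpenKey(root, winreg.EnumKey(root, i)) as sub:
                        name, _ = winreg.QueryValueEx(sub, "DisplayName")
                except OSError:
                    continue  # subkey without a DisplayName value
                if "Office" in name or "Outlook" in name:
                    found.append(name)
    return sorted(set(found))

if __name__ == "__main__":
    for product in installed_office_products():
        flag = "  <-- too old for Office 365" if "2003" in product else ""
        print(product + flag)
```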

Application Bombs

Other application-related time bombs come from well-publicized issues surrounding third-party app compatibility. Check compatibility up front rather than finding out the hard way that a line-of-business application won’t run on Windows Server 2012. Many firms are learning that the vendors behind their mission-critical apps have gone out of business, or that the apps are 32-bit-era code that doesn’t always run cleanly on the 64-bit-only Windows Server 2012 operating system.
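As part of that check, it helps to know which of your in-house executables are 32-bit builds before you ever touch the new server. The short Python sketch below reads the PE header of an .exe or .dll and reports the target machine type; the sample path is a placeholder, and bitness alone doesn’t tell you whether the vendor actually supports Windows Server 2012 R2.

```python
import struct

MACHINE_TYPES = {0x014c: "32-bit (x86)", 0x8664: "64-bit (x64)",
                 0x0200: "Itanium (IA-64)"}

def pe_bitness(path: str) -> str:
    """Read a PE file's COFF header and return its target machine type."""
    with open(path, "rb") as f:
        if f.read(2) != b"MZ":
            return "not a PE executable"
        f.seek(0x3C)
        pe_offset = struct.unpack("<I", f.read(4))[0]   # e_lfanew field
        f.seek(pe_offset)
        if f.read(4) != b"PE\x00\x00":
            return "not a PE executable"
        machine = struct.unpack("<H", f.read(2))[0]     # COFF Machine field
        return MACHINE_TYPES.get(machine, f"unknown machine 0x{machine:04x}")

if __name__ == "__main__":
    # Placeholder path: point this at your line-of-business binaries.
    print(pe_bitness(r"C:\Apps\LegacyLOB\lobapp.exe"))
```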

Beyond line-of-business apps, admins also need to consider less-than-obvious compatibility issues, namely around Web apps and agents, such as backup and recovery and antivirus apps.

Do Not Underestimate The Complexity Of Your Migration

Microsoft provides free discovery, migration and configuration tools, including the Microsoft Assessment and Planning Toolkit, the Azure VM Readiness Assessment and the Windows Server Migration Tools. But here’s the catch.

According to Microsoft partners, no matter how handy the tools and how straightforward the migration may seem, never underestimate the complexity of migrating. Accounting for every application, workload and embedded system is yeoman’s work, and it shouldn’t be undervalued.

“In my experience as a consultant, most businesses are caught off guard by what is found during the discovery phase in a re-platforming project — often very different from what they have documented and inventoried,” wrote Joe Terrell, an executive with EMC’s cloud portfolio services, in a company blog regarding Server 2003 migrations. “Missing a dependency can be catastrophic during a technical project, whether it’s a migration or a re-platforming initiative.”
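A cheap way to sanity-check the documented inventory against reality is to watch what actually talks to the old server before you decommission it. The sketch below is a rough Python illustration that tallies the remote addresses holding established TCP connections, either by running the built-in "netstat -ano" command locally or by reading a saved dump captured on the Server 2003 box; anything unfamiliar in the list is a dependency to chase down before migration day. The parsing is intentionally simple and the output format can vary, so treat it as a discovery aid rather than an authoritative dependency map.

```python
import subprocess
import sys
from collections import Counter

def tally_peers(netstat_text: str) -> Counter:
    """Count remote IPs that hold ESTABLISHED TCP connections,
    given the text output of 'netstat -ano'."""
    peers = Counter()
    for line in netstat_text.splitlines():
        parts = line.split()
        # Expected shape: TCP  local:port  remote:port  ESTABLISHED  PID
        if len(parts) >= 5 and parts[0] == "TCP" and parts[3] == "ESTABLISHED":
            peers[parts[2].rsplit(":", 1)[0]] += 1
    return peers

if __name__ == "__main__":
    if len(sys.argv) > 1:
        # Option 1: analyze a saved dump captured on the Server 2003 box.
        text = open(sys.argv[1], encoding="utf-8", errors="ignore").read()
    else:
        # Option 2: run netstat locally (Windows built-in command).
        text = subprocess.run(["netstat", "-ano"], capture_output=True,
                              text=True, check=True).stdout
    for ip, count in tally_peers(text).most_common():
        print(f"{ip:15s}  {count} connection(s)  -> documented dependency?")
```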

Does Your Culture Encourage Teamwork?

Does Your Culture Encourage Teamwork?

business-teamwork

Great teams don’t happen by accident. There are several components that you need to have in place for a great team to emerge. You need to have a vision for everyone to rally behind. You need to have a leader who fosters and encourages teamwork and collaboration. And crucially, you need a work environment that is conducive to working together toward a shared goal.

This last point is one that a growing number of companies are starting to pay attention to as they rethink old conventions. Consider the “open office” phenomenon. Consider the number of companies that believe you have to have a big, fun campus like Google’s in order to have a truly cohesive and effective team.

Your office environment can certainly have an impact on team dynamics, but it’s not the most important factor. What matters more is your company’s culture, something that’s reflected in how you lead, how you manage, how you arrange your office, how you communicate with your team, and more.

The question is, how do you develop an organizational culture that sparks true teamwork and camaraderie?

  1. The first thing to remember is that great teamwork begins with, well, a great team. What I mean by that is that you lay the foundations for great teamwork simply by assembling a team of individuals who fit in with your company culture and values, and who bring diverse and complementary skills to the table.
  2. Communication is, as ever, the key. Quiet teams are usually not very productive or unified teams. That doesn’t mean you have to plan a corporate getaway or a team-building activity every day of the week, but do plan 10 or 15 minutes each day just to get everyone together to discuss the team’s goals and progress.
  3. Implicit in the previous step is making sure your employees all know what the team goals are. Make sure there is a roadmap of where things are headed, and that you communicate how everyone fits in on that roadmap.
  4. Set clear, measurable goals both for the team overall and for individuals. Make those goals challenging, but attainable—and make sure you offer recognition and affirmation when goals are met!
  5. Try to avoid anything that stands in the way of effective, two-way dialogue. Everyone on the team should feel comfortable offering thoughts, opinions, and critiques. If there is fear about speaking up or speaking out, you have a problem you need to address!
  6. Work to correct performance issues promptly and privately. A team member who clashes with others or doesn’t understand the team goals may just need a little extra counseling or communication.
  7. Finally, remember that teamwork begins with the leader of the company. Lead by example. Communicate openly, be accepting of feedback, cherish other voices and opinions, and solicit help when you need it!

Building a great team is something that will require strategy—and real leadership—but it can yield remarkable results.


Improving Security Technologies and Processes

Improving Security Technologies and Processes

security-technologies-processes

Today’s security operations center (SOC) monitors security alerts and alarms from security products and threats indicated by a security information and event management system (SIEM). These alerts and threats turn into cases that funnel into a workflow system in use by the security team. After initial review to determine if the alert is a false positive, additional data is gathered so that analysis can take place. To put it another way, the security team tries to build a story around the valid alert.

Once the story is created, a different team might be assigned to contain the incident and that same team (or another) would be assigned to restore systems to a pre-infection state. This closely resembles today’s Detection-Analysis-Containment-Restoration security process.
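To make those hand-offs concrete, here is a minimal sketch of that Detection-Analysis-Containment-Restoration case flow as a data structure. It is illustrative only: the stage names come from the process described above, while the field names and the idea of appending evidence as the “story” grows are assumptions about how a team might model it.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    DETECTION = auto()
    ANALYSIS = auto()
    CONTAINMENT = auto()
    RESTORATION = auto()
    CLOSED = auto()

@dataclass
class Case:
    alert_id: str
    source: str                      # e.g., which SIEM rule or product fired
    stage: Stage = Stage.DETECTION
    false_positive: bool = False
    evidence: list[str] = field(default_factory=list)  # the "story"

    def triage(self, is_false_positive: bool) -> None:
        """Initial review: either close the case or move it to analysis."""
        self.false_positive = is_false_positive
        self.stage = Stage.CLOSED if is_false_positive else Stage.ANALYSIS

    def add_evidence(self, note: str) -> None:
        self.evidence.append(note)

    def advance(self) -> None:
        """Move forward: analysis -> containment -> restoration -> closed."""
        order = [Stage.ANALYSIS, Stage.CONTAINMENT, Stage.RESTORATION, Stage.CLOSED]
        if self.stage in order[:-1]:
            self.stage = order[order.index(self.stage) + 1]

# Example: a SIEM alert becomes a case, survives triage, and gathers its story.
case = Case(alert_id="SIEM-0007", source="Suspicious logon correlation rule")
case.triage(is_false_positive=False)
case.add_evidence("Logon from unusual host at 02:13 using admin credentials")
case.advance()   # analysis complete -> containment
```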

While there has been some refinement of the security tools used at the detection stage, “…Most of the security products available on the market are just a half-step better than old antivirus products.” When the HIMSS organization surveyed nearly 300 healthcare organizations, the list of technologies providers had in place was one most of us could have recited from memory: antivirus, firewalls, log management, vulnerability management, IDS, access control lists, mobile device management and user access controls. A large majority of these security teams know they can’t stop current attacks (only 22% were confident they could), and 81% said some new technology was needed. Security people are aware that the processes built around those basic technologies have remained virtually unchanged for the last two decades.

Zeroing in on the process, it’s not hard to see what’s broken: the detection and analysis portion, or what I call the knowledge-building portion of the process. Today, attackers run their malware through the latest detection techniques and antivirus engines prior to deployment to make it as invisible as possible, and it may also be coded to evade sandbox detection. Once inside the network, the malware inherits the identity of the system’s user and that person’s access level. The attacker’s activity simply looks like normal IT activity, leaving all of the technologies listed above blind to the attacker; detection never takes place and the security process never kicks off. I can hear some say, “What about encryption? Doesn’t that help?” Having valid credentials gets the attacker around that little problem: if the user can do their work, their access level allows the data to be decrypted.

The other part of the knowledge-building process is analysis. If you were lucky enough to spot some evidence of malware on a system, that system gets cleaned up, but there is little to suggest which other systems were infected or which credentials were compromised. If the data is valuable enough, the attacker can simply start over with the same or a different set of valid credentials.

The process and technology objective should be to move the security team’s focus away from malware and toward the credentials that enable it. To do this, the system should learn the normal credential behaviors and access characteristics for each user and that user’s peer groups, so that anomalous activity can be surfaced and scored.

Security alerts should be automatically attributed to the user credential involved, and those alerts and anomalous behaviors should be placed on a timeline. This creates an attack chain showing the intersection of credential use, assets touched and security alerts; in other words, the entire attack chain is built automatically.

Detection and analysis would become a single “knowledge-building” function: user behavior analytics would make even the stealthiest attacks visible, and the analysis would be built as the attack happens rather than after the fact.
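As a rough illustration of what that scoring might look like, the sketch below baselines each credential’s typical logon hours and hosts, scores new events by how far they deviate, and keeps a per-credential timeline that doubles as the attack chain. The thresholds, features and scoring weights are all assumptions; a real user behavior analytics product would model far more signals (and peer groups) than this.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    credential: str   # the user account involved
    host: str         # asset touched
    hour: int         # hour of day (0-23)
    detail: str       # free-text description or attached alert

class CredentialBaseline:
    """Very small behavioral baseline: which hosts and hours are normal."""
    def __init__(self):
        self.hosts: set[str] = set()
        self.hours: set[int] = set()

    def learn(self, event: Event) -> None:
        self.hosts.add(event.host)
        self.hours.add(event.hour)

    def score(self, event: Event) -> int:
        """0 = looks normal; higher = more anomalous (weights are arbitrary)."""
        score = 0
        if event.host not in self.hosts:
            score += 2          # new asset for this credential
        if event.hour not in self.hours:
            score += 1          # unusual time of day
        return score

baselines: dict[str, CredentialBaseline] = defaultdict(CredentialBaseline)
timelines: dict[str, list[tuple[int, Event]]] = defaultdict(list)

def observe(event: Event, learning: bool) -> None:
    baseline = baselines[event.credential]
    if learning:
        baseline.learn(event)
        return
    anomaly = baseline.score(event)
    # Every scored event lands on the credential's timeline (the attack chain).
    timelines[event.credential].append((anomaly, event))
    if anomaly >= 2:
        print(f"ALERT [{anomaly}] {event.credential} -> {event.host}: {event.detail}")

# Learning phase: normal daytime activity on the user's usual workstation.
observe(Event("jdoe", "WS-104", 10, "file share access"), learning=True)
# Detection phase: same credential, new server, middle of the night.
observe(Event("jdoe", "SQL-PROD-02", 3, "bulk query of patient records"), learning=False)
```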

Ways Security Can Cost Your Business

Ways Security Can Cost Your Business

security-cost

Do you know how secure your organization is—and what the costs of a data breach can be?

If you’re like most organizations, you probably have a pretty good sense of the potential fallout, particularly in light of recent, high-profile breaches. It’s no wonder that 64% of security decision makers say that adopting a data-centric approach to security is a high priority over the next 12 months.

The fact is, data breaches are growing in number, and the financial cost is growing too. The average cost of a data breach has nearly doubled in the past five years, from $6.46 million in 2010 to $12.9 million today.

But the costs aren’t just monetary. What about damage to your reputation? Customers and users place an enormous amount of trust in the companies with whom they do business, and a single breach can damage that trust forever. And what about intellectual property that, if leaked, could sound the death knell for any organization? There’s no recovery from that.

Today, security isn’t just about basic monitoring services; it’s a holistic approach to protection, prevention, and response, and it needs to encompass all aspects of technology. Companies have far more to consider than they once did, particularly given the rise of new technologies and business usage scenarios like Cloud and BYOD.

Here’s what you need to consider when implementing, updating, and enforcing your security policy. Some of the items might surprise you.

“Many organizations, despite having implemented some of the more standard countermeasures (i.e., firewalls, antivirus, IDS) still do not have visibility across their environment to understand what is happening at any given time.” — IDC

External Threats

threat

Welcome to the digital age: the sheer number of external threats is growing, and there’s nothing we can do to stop that growth. All we can do is maintain constant vigilance, through a security policy that is continually updated and enforced.

Threats are increasing at an exponential rate. For instance, there are millions of malware variations that enterprises must defend against, and it’s difficult for signature-based malware detection to keep up. There are more distributed denial-of-service (DDoS) attacks than ever before, and they vary widely; they can be highly targeted or generic, long in duration or short. They also mutate: a new breed of DDoS attack uses Web servers as payload-carrying bots, which makes them even more dangerous because of the exponential increase in attack capacity. And then there are application-layer attacks, which account for 25% of all DDoS attacks, are often aimed at financial systems, and can bring a company to its knees.

What’s even more problematic is that most organizations have already been breached—they just don’t know about it. Malicious operators are like sharks constantly nibbling at the cage. They’re always there. They’re often already inside. And it can be just a matter of time before they strike.

100% of business networks analyzed by Cisco have traffic going to websites that host malware

Internal Threats

External threats are real and dangerous. But internal threats can be just as common—and just as damaging. And we’re not just talking about the disgruntled employee who leaks sensitive data right after they’re fired, although this is a phenomenon that does exist (often with dire consequences).

36-of-breaches-are-employees-misuse

Instead, internal threats are often inadvertent, stemming from a lack of oversight. Ask yourself the following:

• Do your employees download whatever software they want on their work computers?

• Can people access sensitive corporate data on their personal devices?

• Do workers conduct business using their smart phones?

93-of-us-organizations-insider-threats-vulnerable

In other words—does your organization have policies for what employees can do with company information, how they can access it, and what applications they can download? And is managing BYOD scenarios a key part of that policy?

If the answer is no, you have a security problem. More likely, you do have a security policy in place, but if you’re not enforcing it, that’s as good as not having one at all. If this is you, you’re not alone. According to a recent survey by DataMotion, 44% of respondents only moderately enforce their internal security policies.

But internal threats are just as real as external threats—and companies that don’t get a handle on them are at risk.

Untrained Staff

security-awareness

When it comes to security, one key oversight is not training people. It’s imperative that employees know what your security policies are—all the way from what devices they can use to what applications they can download. Educate. Evangelize. Enforce. Let them know.

The rising problem of Shadow IT

More and more organizations are struggling with Shadow IT, the use of hardware or software that is not supported or authorized by an organization’s IT department. Shadow IT can range from developers using various Software-as-a-Service platforms to employees storing corporate data in cloud storage solutions like Dropbox or Google Drive. These solutions seem innocuous to most people—which is why employees need to receive comprehensive training about what is a security risk and what isn’t.

53% of federal government IT security decision makers state that careless and untrained insiders pose the greatest IT security threat to their agencies.

Governmental Compliance

Would you pass an audit for governmental compliance with security policies?

The surprising news is that a large number of companies aren’t sure they would. In fact, one survey of 780 IT and business decision makers in North America found that nearly 60% are only moderately confident that their organizations would be compliant with data protection requirements, across a wide variety of industries. Even more shocking? Global PCI compliance is only at 20%.

government-top-breaches

Complicating matters is the fact that many organizations don’t even know that governmental compliance regulations apply to them.

Take healthcare for instance. There are many companies that work downstream from healthcare organizations. If they’re handling Protected Health Information, they must be HIPAA-compliant. The law handles this by requiring that the hiring healthcare organization have them sign paperwork stating that they are a Business Associate. Yet according to a DataMotion survey, 40.5% of respondents had either not been asked to sign a Business Associate agreement, or weren’t sure if they had signed one.

Both the Business Associate and the healthcare organization are at risk for non-compliance.

But even more worrisome? How simple it is for confidential patient healthcare information to be compromised when a company is not compliant.

Choosing the Right Partners

partners

More and more organizations are choosing to outsource their security operations—no surprise given the fact that IT outsourcing is a growing trend in and of itself. But when it comes to outsourcing security, it’s truly buyer beware. The first step? You need to understand exactly what it is that you need to protect—generally, devices, network, applications, and data—and then determine what components of these you’re outsourcing. The second step is to choose the right partner or partners for your specific needs. And keep in mind that the more you can consolidate vendors, the more efficient your strategy will be.

Balancing performance and cost

Make no mistake: security is expensive. Not having security is even more expensive. But part of choosing the right partner comes down to understanding the balance between performance and cost. The simple fact is that you will never be 100% secure. Choose a vendor who can help you make the right decisions around balancing performance, effectiveness, and cost.

Physical Security

physical-security

Physical security is the protection of people, hardware, programs, networks, and data from physical threats that could cause damage or loss. In other words, it’s having a data center that is protected from fire, natural disasters, burglary, terrorism, theft … the list goes on and on. If your physical environment isn’t secure, nothing else matters. Yet physical security is one of the most overlooked aspects of a security strategy.

The physical management of a data center covers all aspects of physical security: security policies and procedures, security officer staffing, access control systems, video surveillance systems, standards compliance, and physical security design and improvements within the facility. Make sure the data center you choose complies with the relevant standards, and that you get annual audits.

There were 783 tracked U.S. data breaches in 2014. That’s more than two breaches every single day of the year.