Category Archives: Best Practices

Thanatos Trojan removes competing malware



Thanatos is the latest strain of malware to surface on the underground hacking market, and it sports the ability to delete competing malware from infected targets.

Thanatos, named after the personification of death in Greek mythology, was discovered on 6 March by security firm Proofpoint. It markets itself as an alternative to the ZeuS banking Trojan, but also advertises its malware-killing capabilities.

According to an ad on an underground hacking forum, Thanatos works on all Windows versions, does not require admin privileges, evades antivirus detection, and supports both 32- and 64-bit systems. Like ZeuS, whose source code was famously leaked, it is written in C++, Masm, and Delphi.

The Trojan’s main functionality is its FormGrabber module, which injects into the processes of popular Web browsers such as Internet Explorer (7-11), Firefox (all versions), Google Chrome (30+, except version 47) and even the newer Edge.

Its creators have revealed it is not yet compatible with Opera and Safari, but they are working on expanding support for these browsers.

Thanatos’ malware-killing capability comes from two components: a downloader module that fetches and installs other software, and an AV-Module that behaves like an antivirus, scanning the infected target for other known malware and deleting it.

To ensure that what it detects is actual malware and not a false positive, Thanatos stores a copy of each suspicious file and uploads it to VirusTotal for confirmation; it is the first known Trojan to take such a step.
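The article doesn't say how the VirusTotal check works, but such lookups are typically done by file hash rather than by uploading the file each time. The sketch below builds a hash-based lookup URL against VirusTotal's public v3 API; the actual request and the API key are left as comments, and the sample bytes are placeholders:

```python
import hashlib

def file_lookup_url(data: bytes) -> str:
    """Build a VirusTotal v3 lookup URL for a file's SHA-256 hash.

    Querying by hash avoids re-uploading the file; the response
    reports how many engines flag the sample as malicious.
    """
    digest = hashlib.sha256(data).hexdigest()
    return f"https://www.virustotal.com/api/v3/files/{digest}"

# The real request adds an "x-apikey" header (key is a placeholder):
# requests.get(file_lookup_url(sample), headers={"x-apikey": "YOUR_KEY"})

print(file_lookup_url(b"abc"))
```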

All of this is available as a malware-as-a-service offering for $1,700 per month, or $12,000 for a lifetime license.


Ransomware Incidents’ Key Considerations



Ransomware attacks are on the rise and likely to become more painful and frequent, as attackers are finding that organizations are not well prepared to defend themselves and are often willing to pay handsomely to end an incident. A ransomware attack commonly involves malware or code that encrypts data files with strong encryption and replicates and propagates quickly throughout networks to maximize its presence and impact. The attacker holds the encryption keys needed to decrypt the data and offers them in exchange for a ransom. Victims who do not pay are not given the keys required to decrypt and access their data.

Organizations that consider the threat of a ransomware attack to be both likely and materially business-impacting should consider a number of issues to limit the impact of these attacks and respond effectively. Here are five key considerations when evaluating this threat scenario and response plans:

Managing the Risk: To Pay or Not to Pay?

One of the most difficult decisions an organization has to make is whether or not to pay the attacker for the encryption keys, or for other means of regaining access to its data. Often, organizations pay only as a last resort, but it is a risk-management decision that needs to be considered carefully. In some cases, it may be economically and operationally more efficient to pay the fee than to try to restore data and systems. That said, if an organization pays and this becomes known to the attacker community or the public, the organization may become the target of similar attacks due to the perception that it paid once and may do so again.

The decision to pay or not to pay should be considered by decision makers prior to an attack. Organizations should establish thresholds to identify their risk appetite for this type of situation, and factors such as loss of productivity, availability of data, reputational impact and cost of recovery should be considered. These thresholds should be designed to establish both at what point in the attack and at what price it is favorable to pay.

Negotiation

It is often the case that a ransomware attack includes a demand for a large amount of currency, often Bitcoin, in exchange for the mechanism to release the data. If an organization decides that it is willing to pay the attacker, it should engage in a dialogue with them, if possible, to negotiate the fee. The adversary is often more interested in getting some money than none. The fee an organization is willing to pay should be based on the projected cost of remediating the incident without the attacker's help. If the organization can negotiate a fee lower than this cost, the decision to pay may be an easier one.
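The break-even reasoning above can be sketched in a few lines. Every figure and the decision function itself are hypothetical illustrations of the comparison, not a recommendation:

```python
def should_consider_paying(ransom: float, remediation_cost: float,
                           downtime_cost_per_day: float,
                           recovery_days: float) -> bool:
    """Compare the ransom demand against the projected cost of
    recovering without the attacker's help: restoration work plus
    lost productivity while systems are down."""
    recovery_total = remediation_cost + downtime_cost_per_day * recovery_days
    return ransom < recovery_total

# Hypothetical figures: a $50k ransom vs. $120k of restoration work
# plus 10 days of downtime at $15k/day ($270k total).
print(should_consider_paying(50_000, 120_000, 15_000, 10))  # → True
```

In practice the thresholds discussed earlier (risk appetite, reputational impact) would feed into this comparison as well; they are harder to reduce to a single number.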

Is this the beginning or the end of the attack?

Recent attacks have demonstrated that attackers are using ransomware-style attacks as the last module of a multifaceted attack strategy. The encryption of data can be used to distract an organization from other attack activities and to cover the attacker’s tracks while they exfiltrate data assets or implant malware tools for later use. Ransomware attacks are obvious and intended to make it known that the adversary has successfully exploited the organization. Remediation of a ransomware attack should therefore always be followed by a thorough investigation to ensure that the attacker did not carry out other malicious actions or leave behind capabilities to repeat the attack in the future.

Are backups good enough and should they be used?

Often, organizations that choose not to pay recover from a ransomware attack by restoring their data from backups. This assumes the backups are comprehensive, have integrity and are recent enough to be useful. It is important to consider whether the backups are already infected with the attack malware/code or are susceptible to being affected by the original attack. A sophisticated attacker will implant their attack capabilities on systems and let them lie dormant for a significant period of time, hoping they will propagate into the organization’s backups. Once a backup is restored, the attacker will attempt to use the same attack method again.

One way to defend against this scenario is to back up only data files, not system files, to limit the possibility of reinfection. Attack code may still be included in the data files, but some action would have to occur for it to be installed and operate again. Ideally, the method of exploitation and attack would be positively identified before the backups are restored.
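A data-only backup policy of this kind amounts to an include filter over file types. The sketch below is illustrative only; the extension allow-list is an assumption, not a vetted policy:

```python
from pathlib import Path

# Illustrative allow-list: back up document and data formats, skip
# executables, DLLs and other system files that could reintroduce
# dormant attack code when a backup is restored.
DATA_EXTENSIONS = {".docx", ".xlsx", ".pdf", ".csv", ".txt"}

def backup_candidates(root: str) -> list[Path]:
    """Return only data files under root; anything outside the
    allow-list is excluded from the backup set."""
    return [p for p in Path(root).rglob("*")
            if p.is_file() and p.suffix.lower() in DATA_EXTENSIONS]
```

As the text notes, this narrows but does not eliminate the risk: malicious content embedded in a document still needs to be identified before restoration.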

Identify when networks and systems should be segmented and/or disabled

Malware/attack code associated with ransomware often attempts to replicate and propagate itself across systems and networks as fast as possible to increase its effectiveness. In many cases, organizations use resources such as shared storage and network file shares that are easily leveraged by modern ransomware tools, such as the popular CryptoWall. It is important to identify when it is appropriate to segment and/or disable networks and systems to contain the attacker. Doing so can have significant business impacts, so the conditions and scenarios that qualify for these actions should be discussed and agreed upon in advance with business process owners and leaders.

Preparation is key to a successful response for any attack scenario, but especially for a ransomware attack. These attacks and the decisions and actions that an organization is required to take to effectively respond to them go well beyond technical considerations and often fall into the realms of both enterprise and information risk management. Regardless of what decision is made, business leaders need to be aware of the broader considerations associated with this type of attack to ensure they are not targeted by the original or different adversaries in the future.



EMC launches DD VE, virtual edition of Data Domain


EMC unveils the virtual edition of its Data Domain disk library, DD VE, allowing the Data Domain operating system to run inside a VMware hypervisor and back up to any hardware.

EMC today released a software-only version of its Data Domain disk backup product to handle data deduplication and replication to industry-standard hardware.

Data Domain is EMC’s market-leading disk backup library platform. Data Domain Virtual Edition (DD VE) decouples the software from the Data Domain hardware. DD VE is a virtual appliance that installs inside a VMware hypervisor and uses the Data Domain operating system to reduce data capacity during backups. Customers supply the target hardware and still need backup software, just as they do with physical Data Domain appliances.

A DD VE license includes Data Domain’s inline deduplication, replication, encryption and DD Boost for faster backups.

DD VE’s use cases, product details

EMC is pitching DD VE mainly for remote offices, although it can also be used by cloud providers and to protect data on hyper-converged systems.

DD VE protects from 1 TB to 16 TB of data, so it can conceivably replace the smallest Data Domain physical library in an organization, but not the larger libraries, which scale to hundreds of terabytes. Customers can scale DD VE in 1 TB increments.

At $1,675 per terabyte, the raw cost of the software version is no lower than that of the physical appliances. For instance, a 4 TB DD VE costs $6,700, compared with the list price of $6,806 for 4 TB of capacity on a DD2200. For 16 TB of DD VE, the cost is $26,000, compared with the $23,027 list price for the maximum of 17.2 TB on a DD2200. The DD VE pricing doesn’t include hardware.
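The quoted per-terabyte rate makes the license math easy to check; this snippet simply reproduces the arithmetic behind the figures above:

```python
PRICE_PER_TB = 1_675  # DD VE list price, USD per terabyte

def dd_ve_cost(capacity_tb: int) -> int:
    """DD VE license cost at the quoted linear per-TB rate
    (target hardware not included)."""
    return capacity_tb * PRICE_PER_TB

print(dd_ve_cost(4))   # → 6700, matching the 4 TB figure quoted above
# Note: dd_ve_cost(16) == 26_800, slightly above the quoted $26,000,
# which suggests the 16 TB figure includes a small volume discount.
```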

But DD VE allows customers to install and scale Data Domain software in use cases where it was not previously available or convenient. It could save customers money when backing up small amounts of data in many sites, instead of installing a Data Domain physical library in each one. A DD VE license can be spread among sites or hardware appliances.

“Being software-defined makes the world much more accessible for us,” said Caitlin Gordon, EMC’s director of product marketing for data protection. “Software-defined storage can lead to a lot of exciting things, like the ability to deploy in a cloud. Software-defined means you can deploy it anywhere.”

EMC is offering electronic licensing for DD VE. Customers can buy and license the amount of capacity they want by downloading OVS files. Starting April 22, EMC will offer a 0.5 TB try-and-buy trial for nonproduction use. The free trial version has only community support.

The software assesses the target hardware during installation to make sure it is compatible. EMC will also publish a hardware compatibility list, and recommends customers use DD VE with a RAID 6 scheme.
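The RAID 6 recommendation carries a capacity overhead worth keeping in mind when sizing the target hardware: two disks' worth of capacity in each group goes to double parity. A quick sketch of the usable-capacity math (disk counts and sizes are hypothetical):

```python
def raid6_usable_tb(disks: int, disk_tb: float) -> float:
    """Usable capacity of a RAID 6 group: two disks' worth of
    capacity is consumed by the double parity."""
    if disks < 4:
        raise ValueError("RAID 6 requires at least 4 disks")
    return (disks - 2) * disk_tb

print(raid6_usable_tb(6, 4))  # → 16.0 TB usable from six 4 TB disks
```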

There is no support for deduplication across devices in the original release.

Gordon said EMC will expand capacity for DD VE “relatively quickly,” as well as add features.

DD VE opens ‘new markets’ for EMC

The DD VE release is no surprise. EMC executive Guy Churchward spoke of EMC running it internally in an interview nearly two years ago, and EMC Information Infrastructure CEO Dave Goulden said in January the product would be generally available soon. Dell, Hewlett Packard Enterprise and Quantum already have virtual backup appliances.

Gordon said EMC has been running DD VE in the lab for so long that the initial release is actually DD VE 2.0. DD VE has also had a long customer technical evaluation program.

“For EMC to deliver a virtual appliance wasn’t overly challenging, but it does open new markets for them,” said Jason Buffington, principal analyst at Enterprise Strategy Group Inc., in Milford, Mass. “This unblocks some of the more edge or fringe scenarios for people who may have wanted a Data Domain box, but wanted a different form factor.”

Buffington said hyper-converged customers and managed service providers (MSPs) are good DD VE candidates, along with remote offices.

“If you put all your eggs in a hyper-converged basket, do you really want another basket — a physical Data Domain — next to it?” he said. “If you already believe in virtual environments, why not put DD VE in?”

Buffington said the attraction for MSPs is that they don’t have to try to sell customers on a multi-tenant setup. They can give each customer a dedicated DD VE for their data.

A successful test case for DD VE

Ben Leggo, general manager of cloud services for Australian managed services provider Tecala, said he has tested DD VE for four months with a variety of hardware and backup software. He said the virtual version worked as well as the software in his physical Data Domain boxes, and he intends to put it in production for several data protection services.

“Our principal use case will be for branch office backup,” Leggo said. “It’s quite expensive to deploy hardware [at branch offices] for the amount of data you want to back up. If a customer’s already virtualized, we can deploy this quite cost-effectively and replicate out to physical Data Domain devices.”

Tecala has DD4500 libraries in its data center and DD2200s at customer sites. Leggo said when EMC supports 70 TB with DD VE, he will consider replacing physical Data Domains in the data center. He said he is also looking for EMC to add public cloud support to DD VE so Tecala can offer services that back up data to Microsoft Azure.

Gartner distinguished analyst Dave Russell said DD VE can be a quick way to configure disk backup for small implementations, but customers would probably not want to scale it too high on their own hardware.

“This is good for rapid deployment and rapid redeployment of resources,” he said. “Today’s project might be 10 remote offices, but two quarters from now, it might be something different and you can reconfigure DD VE on the fly. But software-defined puts some of the onus on the IT shop to be a product integrator themselves. The bigger the platform capacity, the more onus on the customer to deploy it on the right kind of hardware.”


Leaf-Spine Network Architecture



Leaf-spine is a two-layer network topology composed of leaf switches and spine switches. Servers and storage connect to leaf switches and leaf switches connect to spine switches. Leaf switches mesh into the spine, forming the access layer that delivers network connection points for servers. Spine switches have high port density and form the core of the architecture.

Every leaf switch in a leaf-spine architecture connects to every spine switch in the network fabric. No matter which leaf switch a server is connected to, its traffic crosses the same number of devices to reach any other server. (The only exception is when the other server is on the same leaf.) This minimizes latency and bottlenecks, because each payload only has to travel to a spine switch and on to another leaf switch to reach its endpoint.
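The uniform path length is easy to verify on a model fabric. This sketch builds a hypothetical 4-leaf, 2-spine full mesh and measures shortest paths between leaves:

```python
from collections import deque

def build_fabric(leaves: int, spines: int) -> dict:
    """Full-mesh leaf-spine fabric: every leaf connects to every spine."""
    graph = {f"leaf{l}": set() for l in range(leaves)}
    graph.update({f"spine{s}": set() for s in range(spines)})
    for l in range(leaves):
        for s in range(spines):
            graph[f"leaf{l}"].add(f"spine{s}")
            graph[f"spine{s}"].add(f"leaf{l}")
    return graph

def hops(graph: dict, src: str, dst: str) -> int:
    """Breadth-first search: number of links on the shortest path."""
    queue, seen = deque([(src, 0)]), {src}
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nbr in graph[node] - seen:
            seen.add(nbr)
            queue.append((nbr, dist + 1))

fabric = build_fabric(leaves=4, spines=2)
# Any two distinct leaves are exactly two links apart (leaf-spine-leaf),
# so east-west latency is uniform regardless of which spine carries it.
print(hops(fabric, "leaf0", "leaf3"))  # → 2
```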

A leaf-spine topology can be layer 2 or layer 3, depending on whether the links between the leaf and spine layers are switched or routed. In a layer 2 leaf-spine design, Transparent Interconnection of Lots of Links (TRILL) or shortest path bridging takes the place of spanning tree. All hosts are linked to the fabric and get a loop-free route to any Ethernet MAC address through a shortest-path-first computation. In a layer 3 design, each link is routed. This approach is most efficient when virtual LANs are confined to individual leaf switches or when a network overlay, such as VXLAN, is in use.
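In the routed design, traffic is commonly spread across the spines with equal-cost multipath (ECMP) hashing, which keeps each flow pinned to one uplink so packets are not reordered. The article doesn't cover this detail; the sketch below is a toy illustration, and the spine names and hash choice are arbitrary:

```python
import hashlib

SPINES = ["spine0", "spine1", "spine2", "spine3"]

def ecmp_next_hop(src_ip: str, dst_ip: str,
                  src_port: int, dst_port: int, proto: str = "tcp") -> str:
    """Pick a spine by hashing the flow 5-tuple, the way a routed
    leaf-spine design spreads flows across equal-cost uplinks while
    keeping every packet of one flow on the same path."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return SPINES[digest % len(SPINES)]

# The same flow always hashes to the same spine:
spine = ecmp_next_hop("10.0.0.1", "10.0.1.9", 40000, 443)
assert spine == ecmp_next_hop("10.0.0.1", "10.0.1.9", 40000, 443)
```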

8 ways to get the most out of your ERP system


Experts in ERP software share tips on choosing, deploying and managing an ERP system


Implementing an ERP system is an expensive proposition that can cost millions of dollars (with the software licenses, consulting fees, integration costs, training and upgrades factored in) and take months, even years, to fully deploy. And, according to a report commissioned by Panorama Consulting, one in five ERP implementation projects (21 percent) ends in failure.

So what steps can you take when choosing or deploying an ERP solution to decrease the likelihood of failure? Here are eight suggestions from ERP experts, systems integrators and project managers who have successfully implemented an ERP software system.

1. Choose an ERP solution that best suits your business’s needs (and industry requirements). “Because it’s a major investment of time and money, be sure to create a comprehensive list of requirements with input from multiple key stakeholders,” says Sumanth Dama, CTO, CuroGens, a systems integrator.

“Define your company’s core needs,” says Evert Bos, solution architect, Sikich, a professional services firm. “Your system requirements should be a short list of functional and operational must-haves and an equally short list of key strategic requirements that will support your future growth. A well-defined list of requirements will result in a more seamless implementation process and an ERP system with lower total cost of ownership.”

“Seriously consider how [the system] would fit into your current IT infrastructure,” says Dama. And “ensure that [the] ERP [you select] can support the regulatory requirements of your business – [and if] you will have to pay big dollars for customizations. Make your choice not solely upon price or vendor alone.”

2. Look at total cost of ownership (TCO). “My best advice to a cost-conscious [organization] deciding on an ERP system is to really look at total cost of ownership,” says David Valade, vice president of operations at Alta Vista Technology. “Yes, the software itself costs money, but as you compare costs for different solutions don’t forget about hardware, consulting dollars to implement, internal resources to maintain the software and future upgrade costs that come with it. I know a company that doubled the size of its IT department to maintain an ERP solution that initially looked like a bargain.”

3. Vet vendors/integration partners carefully. “Choosing the right vendor is a major part of the process,” says Dama. “Ensure the vendor has done their homework and can get the job done to your specifications. Be sure they’re prepared to give specific estimates regarding TCO and the ROI you can expect for your business and in what period of time,” he says. “Discuss in detail the budget and project completion time and require references from other organizations that have hired them. Also be certain they have the proper experience serving your specific industry.”

“When embarking on an ERP implementation, it’s essential for your software and service provider to know your business,” says Paul Magel, president, business applications and technology outsourcing division, CGS. “Every industry has its own requirements, from terminology to specific functionality. A one-size-fits-all ERP will leave you with many gaps and needed customizations,” he says. “Whether you are in a specialized vertical such as footwear or are a well-known apparel manufacturer, ensure your vendor is an industry expert.”

4. Make sure senior management is on board. “One key predictor of how your ERP implementation will go is upper management buy-in,” says Kirk Heminger, marketing manager, Penta Technologies. “Senior management [doesn’t have] to be integrally involved in every step of the implementation process. However, management involvement and backing in prioritizing the project, setting direction, allocating resources and facilitating communication can be the single most important success factor in any implementation process.”

“It’s imperative the team tasked with choosing a new ERP system collaborate with senior leadership team or a member of the C-suite, where one individual from [the] executive team can serve as the sponsor and internal champion of the project to streamline approvals and break down internal silos,” says Rick Hymer, vice president, North America service line leader, packaged based solutions, Capgemini.

5. Develop a roadmap. “A roadmap not only ensures a smooth process, but it shows financial stakeholders when the implementation will start, the cost and when they can expect to start seeing benefits,” says Ramesh Iyanswamy, global head of SAP HANA, Tata Consultancy Services. “Roadmaps help manage large, complex technology and business process transformations in a series of well-defined phases, and help take control of cost planning. [Use the] roadmap [to] phase deployment over a period of shorter go lives, spreading out the cost over years.”

“It is critical to establish a well-defined footprint for your ERP implementation,” says Bos. “Define the functional areas of your organization that will be both affected and unaffected by the ERP system. For the affected areas of the organization, detail the ERP functionality that will be deployed and the business case for expected improvements,” he suggests. “Once this is complete, it is critical to exercise strong control over the scope of the project. Changing the scope of an ERP system should be a difficult process.”

“Putting together a detailed roadmap that outlines what steps the business needs to take before transitioning to the new system will help pinpoint potential ramifications and identify ways in which to mitigate those speedbumps from the get go,” says Hymer.

6. Establish a cross-functional team to oversee implementation. To help smooth the implementation process, companies should set up an interdepartmental implementation team. “This team should be made up of individuals [from different departments,] such as accounting, operations, information technology, payroll, human resources, purchasing, service and inventory,” says Heminger. “The team needs decision-making authority and a clear escalation path when facing decisions beyond their authority.”

“By building a cross-functional team, companies not only improve the likelihood that all areas of the business are addressed, but also help create buy-in that can drive the overall project’s success,” says Christine Hansen, senior manager, product marketing, Epicor Software Corporation. “All cross-functional teams should include certain key organizational functions such as project management, IT and executive management.”

7. Deploy the ERP one module at a time (not all at once). “For example, implement your accounting applications first, such as your general ledger, payables and receivables,” says Allan Alutalu, a project manager for Fujitsu Consulting. “Once those modules are up and running, implement your other modules, such as HR and payroll. Then implement your satellite modules and other add-ons. Many organizations have failed [in] their ERP implementations by using the big bang approach and attempting to convert all their systems to the ERP all at once – a big, costly mistake.”

8. Don’t skimp on training. “The most critical element of an ERP rollout is the people,” says Curtis Thornhill, CEO, Apt Marketing Solutions. “Create a full plan to train and onboard stakeholders. Ensuring implementation resources remain assigned for the first six weeks is essential to supporting end-users, managing change requests and addressing points of failure. Appreciable support is the most important factor for ensuring successful adoption of a new ERP system.”

“Make a solid investment in training,” says Hansen. “This is another key element to successful ERP deployments and should not be overlooked. If employees are undertrained, they will often fail to take full advantage of the tools they’ve been given, and can become frustrated, abandoning the system and resorting to manual workarounds that drain time and resources and create islands of information,” she explains. Therefore, it’s important to create “a training program that is appropriate to each user’s role within the company, with specific goals and outcomes defined.”


Quantum cloud computing arrives next decade



Cloud-based quantum computing could be helping to solve big science problems within the next decade, according to Bill Gates.

Speaking during an ‘Ask me Anything‘ interview on Reddit, the Microsoft founder was optimistic about the future of the nascent technology.

“Microsoft and others are working on quantum computing. It isn’t clear when it will work or become mainstream. There is a chance that within 6-10 years cloud computing will offer super-computation by using quantum,” he said.

Enterprise ERP reaches for the cloud



While small and midsized businesses have led the way in migrating their ERP applications to the cloud, enterprises have been lagging behind.

And there’s one very good reason for enterprises not plunging ahead with SaaS ERP — it’s hard.

ERP systems at large companies are vast, complicated, and deeply entrenched in an organization’s infrastructure. ERP is so integral to an organization that the time, expense and disruption a move to the cloud might entail has discouraged IT managers from undertaking the task.


As a recent report by Forrester Research puts it, “Whereas thousands of smaller and midsize companies have already adopted SaaS ERP systems, enterprises are in the very early adoption stages. Several leading software suppliers are aggressively investing in SaaS ERP capabilities that will appeal to multinational enterprises as customer demand accelerates.”

The report is based on a survey of 770 North American and European technology decision makers. In 2012, 12 percent had already replaced, or planned to replace within two years, their traditional ERP with SaaS ERP. In 2014, the number grew to 35 percent.

“ERP has seen a significant level of adoption, but certainly much less than customer-oriented apps like CRM, as well as other areas like procurement and HR. The functional breadth of ERP has not been fully replicated in SaaS-native solutions to the extent it exists in mature on-premises ERP solutions. Also, the ability to support industry-specific scenarios and company-specific requirements requires a high degree of configurability and extensibility in the SaaS platform in lieu of customization. This is starting to materialize,” says Paul Hamerman, vice president and principal analyst at Forrester.

While the number of SMBs moving ERP software to the cloud is higher than the number of enterprises doing so, the enterprise number is on an upward trajectory, says Robert Anderson, vice president at Gartner.

“The number of 30% by 2018 across all enterprises still holds true. If you look at the SMB segment alone, it may be 40% or more accounting for those that will move current on-premise ERP solutions to cloud and an entire net new range of service-centric businesses that will embrace it that haven’t experienced ERP before. By 2020, it should be well on its way to 50% of SMBs in the services sector having embraced some form of cloud ERP apps,” he says.

For SMBs, according to Aberdeen Group, lower costs, along with increased scalability, flexibility and ease of use, have spurred cloud deployment. In a survey, Aberdeen found that midsized organizations are more than twice as likely to have cloud ERP solutions as legacy ones. Aberdeen says those that have moved ERP to the cloud have seen significant advantages, including: a 1.9-times improvement in profitability over the past 24 months; a 3.2-times improvement in time to decision over the past year; a 1.8-times improvement in cycle time of key business processes over the past year; a 33 percent greater improvement in complete and on-time delivery; and a 2-times improvement in internal schedule compliance.


Will enterprises hit those numbers?

“Large organizations will see improvements as well, but not as high as these,” says Nick Castellina, research director at Aberdeen. “That’s because they’re large, multitier deployments. The reason it has taken ERP longer to move to the cloud is because it’s a more robust application, and more integral to the day-to-day operation of the organization, so making those changes will take more time.”

According to Gartner’s Anderson, there are several reasons for enterprise IT being slow to jump to cloud ERP. “Earlier concerns about moving the ‘crown jewels’ or ‘bet your business applications’ off premises were based on security, among a range of other things.”

In addition, it could literally take years to shift large, complex, on-premise code bases to the cloud. And some ERP cloud vendors required almost ground-up rewrites of the application.

Anderson attributes the latest growth spurt toward cloud-based ERP to rapidly evolving enabling technologies: in-memory databases, mobility, social and collaborative tools, embedded analytics and consumer-like user experiences, all rolled into the latest ERP systems. He calls these “post-modern” ERP.

“In a sense, ‘the sun, stars and moon’ had to come together and align and that was the real tipping point,” he says.

Patience is required

At City Harvest, which collects and distributes food to a network of New York City community food programs, IT director James Safanov says, “ERP is taking longer in general to move to the cloud due to the number of stakeholders companies have. If it were just out-of-the-box features, it would be easier. More people need to say when to move, organization-wide,” he says.

For organizations that have moved their ERP software to the cloud, many IT managers report that it was done primarily to simplify IT operations. At City Harvest, for example, Safanov wanted single sign-on accounts, part of the Azure platform.

“Our allocation software and delivery software are both hosted. We evaluate each product, and where it makes sense to move up, we move it. We’re moving applications where we can, while still concentrating on our mission. This means we don’t have to invest in capital acquisitions. We don’t want to manage the Exchange Server on premise, for example, so we moved that to the cloud. Now, we have Office 365 with Exchange Server and SharePoint, Dynamics GP, and Azure services,” he says.

At SFX Entertainment, a producer of live events, media and entertainment content, a combination of acquisitions and aversion to managing its IT infrastructure in-house led to a move to the cloud.

“We don’t want to be in the business of using 100 people to provide non-core, critical services. Second, we have acquired more than 20 companies over the last two years, so in addition to our 625 employees, we have 100 subsidiaries and 30 to 40 brands, and we wanted a system that could be agile, and not take the bandwidth of people in operations. With multiple lines of business that cross over, we needed the flexibility to take all of our businesses and have a standard baseline, and we can uniquely tailor this,” says Madhu Madhavan, vice president, financial systems and technology at SFX Entertainment.

“In 2013 we were growing fast, and into some unanticipated corridors. NetSuite was chosen so we could migrate away from 15 different systems, some of which were on premise. These include bookkeeping on multiple applications, reporting with a combination of on premise apps and SAP,” he adds.

Shaw Industries, a $5 billion flooring manufacturer, is a good example of a global company with a sprawling IT infrastructure. A recent expansion in China provided the opportunity to use SaaS to avoid the need to have its ERP on premise. Over the course of 25 years, IT staff built an integrated system, but the Chinese language and currency would have required massive customization, says Randy McKaig, vice president and CIO, information services.

“Three years ago, Oracle and SAP were not true SaaS solutions. We needed global language support, and didn’t want to add software and IT people everywhere. Our packages are custom, and we need full integration. NetSuite guaranteed links wouldn’t be lost or broken during upgrades, and we have the flexibility to go anywhere in the world. The way they have architected it, if we put customization into their engine, into the core logic, they have it set up from the ground up as a true SaaS,” he adds.

Any cloud deployment costs money, but as with other applications moved to the cloud, organizations typically come out ahead over the long term.

“It cost money to migrate to cloud and over the long haul a business might eventually pay as much as if they had purchased an on-premise solution. That said, the ability to spread out those payments in a predictable manner, release themselves and their people from IT burdens to focus on their quickly changing business requirements and having a turbo-charged ERP experience that doesn’t just handle business transactions, but actually helps more users make better decisions at every stage, typically provides a big boost. Even if the longer term cost between the two deployment methods cancel each other out, the benefits the business will have gained will be recognized as well worth it,” says Gartner’s Anderson.

At SFX Entertainment, not only have costs come down, IT staffers have found NetSuite’s cloud ERP easier to learn and use, and that has made for happier employees.

“True cost of ownership is comparable on premise vs. NetSuite. We’re growing fast, and the cost of integrating operations through these systems, and in a group of facilities and how quick we can do it was shocking. The cost of deploying these systems is 33 percent of what it would have cost. We’ve also seen a great reduction in the learning curve, quick deployment, and it’s been huge for morale. In effect, they’re all at the same table. They’re not getting information from all these different systems. That’s huge because it lets them get in and get in fast,” says SFX’s Madhavan.

Once ERP is moved to the cloud, an organization will realize the same benefits as with any other application, namely offloading work that would have been done by IT staffers.

“ERP in the cloud will significantly reduce the level of internal IT support required, because you are, in effect, outsourcing infrastructure, hosting, and software updating/upgrades to the software vendor. The cost impact is a case-by-case issue, depending on what is being replaced, volumes of activity, etc. My experience is that SaaS ERP is not necessarily a path to lower costs, but often has significant other benefits in flexibility and sustainability,” says Forrester’s Hamerman.

“The power of ERP goes up exponentially when complex processes that used to take 12 screens can be whittled down to one or two very visual, role-specific, intuitive screens. So what follows is that more people in the business are touching the system, making the overall business more efficient and effective. Likewise, that power goes up even more when I can interact with the ERP from anywhere in a simplified manner, whether it be on my smartphone, tablet or laptop. You begin to add other elements such as real-time, context aware embedded analytics for improved decision support and social tools for internal and external collaboration and the entire ERP value proposition changes,” says Gartner’s Anderson.

“For us, the move wasn’t so much about profitability, but efficiency. It’s faster, our staff can allocate foods quickly and more efficiently. We have less waste. We’ve gone from an already low 1.9% down to 1.2%. On-time deliveries increased by 15.8%,” says Safanov at City Harvest.

Nurturing Creativity

As a creative person at a startup or larger company, it feels surreal to do what you love and belong to a team that’s pursuing a worthwhile endeavor.

Chances are, you were hired because you possess the traits of ownership, autonomy, creativity, and grit. Owning your output, shipping on time, and finding your own answers to questions makes your Mondays meaningful.

Alas, like all creatives, you are also prone to the comfort of routines, especially when your team grows and you have more assets and specific processes to follow. When this happens, it can become difficult to be objective about your work. It’s important to assess where you stand—what is the creative quality of your work and, more importantly, what must improve?

Every now and then I need to revisit the wisdom of my favorite entrepreneurs, artists, and thinkers so that I can continually improve my craft—both for my own creative soul and for the betterment of my career and company.

1. Break Out of Your Craft

In her book On Looking: Eleven Walks with Expert Eyes, cognitive scientist Alexandra Horowitz describes how she walked around her local city block with various professionals. Each noticed something different based on his or her expertise: the geologist noticed all types of different stones in buildings and sidewalks, and the typography expert pointed out all the surrounding words.

The bias of our perspective, Horowitz says, is rooted in our expertise; it influences the way we see the world. “The psychiatrist sees symptoms of diagnosable conditions in everyone from the grocery checkout cashier to his spouse; the economist views the simple buying of a cup of coffee as an example of a macroeconomic phenomenon.”

This insight made me realize that it’s important to step out of my own skill set from time to time.

Step into the worlds of your teammates and study their processes. For example, looking over the shoulder of our design team helps me understand their methodology and, in turn, provides insights on what I can tinker with to make the whole process—my creative work included—better.

2. Be Okay With Imperfect Work

Plato once theorized that all things are produced by nature, chance, or art. The most beautiful and greatest is done by nature or chance; the least and most imperfect is art.

Does that bother you?

It drives me mad. Any creative takes his or her work personally. It is emotional labor based on our experiences and the way we see the world (and the way we want others to see the world).

The one thing that makes creative labor scary is the lack of reassurance. Today’s great essay may be tomorrow’s greatest flop. No one knows what a perfect headline or website looks like.

But again, that’s merely one narrative, one way to view your work.

But there is another side to that coin: the lack of reassurance is an opportunity to realize that your work is never complete, and knowing this, you learn something new about yourself and the domain every time you ship a project. As a creative, this should be energizing rather than debilitating.

The choice for how you want to see it is yours.

3. Strive to Be Indispensable

In Linchpin: Are You Indispensable?, marketer Seth Godin urges a new mindset that separates you from being a mere cog in an organization to being a linchpin:

If your organization wanted to replace you with someone far better at your job than you, what would they look for? I think it’s unlikely that they’d seek out someone willing to work more hours, or someone with more industry experience, or someone who could score better on a standardized test. No, the competitive advantage the marketplace demands is someone more human, connected, and mature. Someone with passion and energy, capable of seeing things as they are and negotiating multiple priorities as she makes useful decisions without angst. Flexible in the face of change, resilient in the face of confusion. All of these attributes are choices, not talents, and all of them are available to you.

This attitude is a long-term game—it’s about the narrative you tell yourself. Be aware of what your narrative is, and with a focus on being indispensable as a creative force in an organization, make smart decisions that help you embrace that identity.

4. Be Conscious of Your Daily Schedule

“Inspiration is for amateurs,” painter and photographer Chuck Close famously said. “The rest of us show up and get to work.”

There is perhaps no other tribe more obsessed with others’ personal routines and rituals than creatives—a visceral desire to peek into the lives of great artists and to borrow any or all ideas that champion creativity.

Nowhere is this insight more compelling than from psychologist Mihaly Csikszentmihalyi in his book Creativity: The Psychology of Discovery and Invention.

He explains why habits and schedules are so crucial for creative work:

The implications for everyday life are simple: Make sure that where you work and live reflects your needs and your tastes. There should be room for immersion in concentrated activity and for stimulating novelty. The objects around you should help you become what you intend to be. Think about how you use time and consider whether your schedule reflects the rhythms that work best for you. If in doubt, experiment until you discover the best timing for work and rest, for thought and action, for being alone and for being with people.

This is the foundation for your creative process. Hone it, figure out what works for you, and honor it so that you enable yourself to do the work.

5. Find Your Flow

Anne Rice, author of Interview With the Vampire, said about writing, “It’s always a search for the uninterrupted three- or four-hour stretch.” Uninterrupted time is not only valuable in writing, but in all kinds of work.

Mihaly Csikszentmihalyi championed this notion when he realized that this experience was felt and described in a similar manner no matter the person—religious mystics, scientists, artists, and ordinary working people describing their most rewarding work experiences.

Poet Mark Strand captured the idea of flow beautifully when interviewed by Csikszentmihalyi for his book referenced above:

[When] you’re right in the work, you lose your sense of time, you’re completely enraptured, you’re completely caught up in what you’re doing, and you’re sort of swayed by the possibilities you see in this work. . . . When you’re working on something and you’re working well, you have the feeling that there’s no other way of saying what you’re saying.

This ties into the necessity of understanding your schedule and daily rhythms. For example, all my writing gets done in the morning. If I can get into the flow for three or four hours, then my work is done; the latter half of the day can be spent reading and researching.

Finding your flow boils down to managing your daily schedule, finding a space in which you can do your work, shutting off your phone or other distractions, and enjoying the process.

6. Chase Your Curiosities

“Curiosity pleases me,” said physicist Alan Lightman in The Accidental Universe. “It evokes . . . a readiness to find strange and singular what surrounds us; a certain restlessness to break up our familiarities.”

Creative breakthroughs happen not on a singular path, but at the intersection of multiple domains, cross-pollinating ideas and concepts that are seemingly unrelated.

Avoid the self-sabotage of immediately labeling an idea or a curiosity as bad. Put it through the gauntlet first: ask your team for feedback and execute the idea. The only way you’ll know if it’s working is if you ship.

7. Finally, Rest

There is an arrogance that lingers in business and possibly in our culture at large—the self-congratulations for running on a lack of sleep.

Nothing is as fruitless and self-destructive as walking around claiming that you’re doing meaningful work when you’re bone tired. Nobody’s impressed.

Curator and writer Maria Popova described the importance of sleep in her essay 7 Things I Learned in 7 Years of Reading, Writing, and Living:

Most importantly, sleep. Besides being the greatest creative aphrodisiac, sleep also affects our every waking moment, dictates our social rhythm, and even mediates our negative moods. Be as religious and disciplined about your sleep as you are about your work. We tend to wear our ability to get by on little sleep as some sort of badge of honor that validates our work ethic. But what it really is is a profound failure of self-respect and of priorities. What could possibly be more important than your health and your sanity, from which all else springs?

Every day, you have a window (albeit small) to do truly meaningful, creative work.

Be realistic: no creative person works for eight hours straight (or should). The best you’ve got is what Anne Rice described—an uninterrupted three- or four-hour stretch. To harness that, you not only need a pocket of space and time to do the work, you need the prerequisite of rest.

Disconnected VDI Advantages

Many virtualization vendors are touting the advantages of a virtual desktop infrastructure (VDI) that supports disconnected users — users not connected to the corporate network. But disconnected VDI does more than just make virtual desktops mobile.

1. VDI puts the “personal” back into personal computing
Disconnected VDI can be set up to store a user’s local files and preferences. These settings can follow users from machine to machine, providing them with the level of desktop customization they are used to.

2. Improved desktop performance
In high-latency, low-bandwidth environments, checked-out systems perform better than systems hosted in the data center. Remote offices can overcome many performance problems by primarily running in disconnected mode and then synchronizing desktops on a regular basis for backup.

3. Lower hardware costs
Since virtual machines can run on a variety of hardware, you may not need to upgrade your systems. In addition, each user may only need a single notebook computer. Users can check out their virtual desktops when on the road, and when they are in the office, they can run them in hosted mode.

4. Easier desktop management
Administrators can centrally manage virtualized desktops in the data center, eliminating the need for additional desktop management tools. VDI supports cloning of virtual PCs, offline modifications and online monitoring. Admins can make changes to a virtual PC and have those changes automatically deployed upon checkout.

5. Reduced deployment costs
The cost of deploying new virtual desktops is lowered because of virtual machine cloning utilities and advanced deployment utilities. Both of these tools ease application installations.

6. Lowered data leakage
Administrators can embed additional security, encrypt virtual hard drives and apply security policies to virtual desktops, which helps prevent data loss and data theft. Policies and security controls can be active for both checked-out and checked-in systems.

7. Preserved business continuity
In a disconnected VDI, users can still access desktops if there is a problem with the data center, such as a natural disaster or another emergency. The virtual desktops simply need to be moved to other media and then checked out to users.

8. Centralized backup
Hosting virtual desktops in the data center allows them to be automatically backed up. When a user checks out a desktop and makes changes to it, those changes are rolled back into the data center when the desktop is checked in.

9. Simplified system migrations
Since virtual desktops are created and maintained in the data center, to upgrade a user’s desktop, all you have to do is assign the user a new virtual desktop.

10. Users can work from “virtually” anywhere
Mobile users can “check out” their virtual desktops and run them on notebook computers from the road, since offline virtual desktops do not require Internet or network access to function.

Top VDI challenges

VDI isn’t easy to stand up and deploy, and a lot of people have complaints about it: It’s too complex, it costs too much, performance stinks and admins are hard to find.

For all of the heavily hyped advantages of VDI, there are still a slew of drawbacks, concerns and challenges to successfully implementing and supporting the technology.

VDI and desktop virtualization aficionados often sing the technology’s praises, but implementing it isn’t a walk in the park. It’s hard to get a deployment off the ground, and some projects never make it out of the pilot or proof-of-concept phases. Even when companies do roll VDI out, it doesn’t always succeed.

Many people can — and do — gripe about VDI challenges. Here are the seven most popular complaints:

Complexity

One of the main selling points for VDI is the theoretical nirvana of a centrally controlled and administered desktop that should be — if implemented correctly — much more secure than traditional standalone desktops running on end-user computers. But at what price and complexity do we achieve that theoretical nirvana?

In most circumstances, if your VDI back-end infrastructure suffers an outage, the majority of your users are unable to perform the work that generates revenue and keeps companies running. As a result, your servers, storage and network infrastructure must be highly fault-tolerant. That requires clustered servers, a highly available storage area network and redundant network links that mitigate the risk that a single point of failure can bring your VDI environment to a standstill. Of course, high-availability platforms have been protecting critical applications for many years. But the complexity of creating such a redundant, fault-tolerant environment is not trivial, and is a consideration for any company looking at doing VDI.

Direct – and indirect – costs

One of the first advantages of VDI that pundits and vendors touted was that it can save companies money in much the same way that virtualizing servers has become a cost-saving standard. But the equation turns out to be quite different in the VDI world. Back-end infrastructure must be fully redundant, expandable and fault tolerant. That can be an expensive proposition, depending on how many end users you support.

There are also costly challenges relating to application and operating system licensing. And don’t forget the unseen expenses that come from implementing a new application or environment: Change tends to make people less productive while they adjust to new features and restrictions. Expect some turmoil and confusion as you roll VDI out to any installed user base.

BYOD challenges

Each time you establish a connection from your local computer to your VDI environment, you are served a pre-built image of the operating system, applications and custom user settings. Assuming that end users all have similarly configured desktop or laptop computers, those images should run on most computers in an organization, though certain special cases may require custom images. But what happens when your users clamor for VDI support for mobile devices?

I can connect to my company’s VDI environment with a tablet or smartphone, but I obviously cannot run a Windows 8 desktop image on those devices. Some VDI technology allows you to support smartphones and tablets, but it is an issue that must be addressed in the modern, bring your own device (BYOD) world.

Data center density

There’s no getting around the fact that VDI is a resource hog in the data center. As the density of VDI racks and appliances continues to go up, so does the underlying need for power, cooling and network capacity in the data center.

Implementing resource-intensive services such as VDI is really about chasing data center bottlenecks until you have some marginal balance between cost and performance. If the data center network connection to the Internet is a VDI bottleneck, you can always install 20 or 50 or 100 redundant Internet links, but that will likely cost more than the entire VDI project. There is a delicate balance between providing enough power, cooling and network capacity to adequately run VDI while still keeping infrastructure costs under control.

Security

Because of the centralized control over desktops and applications that comes with VDI, virtual desktop security should always be tighter. The keyword there is “should.”

Centralized control offers the opportunity for a very secure desktop environment, but VDI admins must carefully construct that security through management of end-user policies, physical security of the VDI back-end environment, and the ability to disable hardware and software that leaves traditional desktops at risk. Anti-virus (AV) scanning can also be centralized in a VDI environment, eliminating the need to run AV software and regular scans on local computers.

Admin challenges

VDI admins require a different skillset than traditional administrators because VDI is totally dependent on a tightly-integrated set of infrastructure technology and services. As your company considers transitioning to a VDI environment, your network, server, application and storage admins absolutely must work in unison for VDI to work as designed.

This dynamic can be difficult to instill in various IT teams that may not be accustomed to working closely with admins from other teams. Sometimes groups aren’t willing to work with other teams. No problem: You can always hire someone with the requisite VDI admin expertise, right? In an ideal world, companies implementing VDI would simply search the vast pool of available IT talent for someone with VDI expertise, but VDI is still new enough that finding the right person can be expensive and exasperating.

The laws of supply and demand will likely keep salaries for VDI admins elevated for the foreseeable future.

Performance

Degraded desktop performance has always been one of the criticisms leveled at VDI technology. Great strides have been made in improving performance and increasing the desktop density for back-end VDI components, but performance challenges remain to be solved.

Even with VDI performance increases, there are still certain types of users for whom VDI may simply not be a viable alternative. For example, large files such as graphics and databases must be downloaded to the user’s machine each time they begin work, and must then be uploaded back into the virtual infrastructure when the user is finished. Considering the large amount of network traffic generated by VDI, these users may find that they require a traditional, locally-installed desktop with local storage of user files to get their job done.

The VDI community has made leaps and bounds in the installation, tuning and administration of virtual desktops over the last three or four years. Many of the early roadblocks to successful VDI implementations have been eliminated or addressed via VDI vendor education, VDI community discussions and refined project-planning techniques.