Whether it’s a UC project, a collaboration project or something else entirely, there are steps the integrator and the customer IT department must take to ensure project success. If those steps aren’t taken, prepare for the worst – a failed project! Considering the Cloud in today’s world, we realized that all of these steps remain essentially valid, even the testing step, although Cloud delivery may reduce the scope or complexity of some of them. As a systems integrator, I learned some of these lessons the hard way. You don’t have to!
Key Step #1: Make sure there is senior management buy-in
Don’t make the mistake of being pulled into someone’s “plans” without making sure upper management is on board with the need and the overall plan. Make sure budget has been allocated and know the name of the senior manager who is supporting the project. If you’re the integrator, make sure that this isn’t just a pet project of someone in the customer’s IT department – with no support and no funding.
Key Step #2: Involve end-users and stakeholders early and often
No one understands their needs and their processes better than end-users. And no one can derail a project faster than end-users who see no value in the project. Would we be wrong in assuming that end-users were part of the “needs analysis” and fact finding that went on in the development of the solution that is the basis for the project? End-user representatives should also be involved in the project, to ensure their buy-in and championing when the project is rolled out.
Key Step #3: Set expectations early
All stakeholders and end-users need to be educated on what is going to change, what the change will look like, how it will affect their own work routine and where they can go for help if they have problems. This should NOT be left until right before going live. Change is never well-received and efforts to minimize the concern and provide help will go a long way in ensuring that end-users will actually use the new solution.
Key Step #4: Minimum specs usually mean headaches at some point
As the systems integrator or VAR, don’t be guilty of trying to undersell the components necessary for the solution with the plan to go back during project implementation and “add on”. As the customer IT leader, don’t make the mistake of going after strictly “low bid” or the least costly solution unless you know for 100% positive that you are getting exactly what is needed and that everything will integrate correctly and seamlessly with your infrastructure. Here’s a good adage…. “You get what you pay for.”
Key Step #5: Create a detailed project plan
Formal project plans force the project manager, and everyone involved, to consider all the necessary phases and steps, and the order in which to proceed. In addition, they define accountability and responsibility – what are the integrator’s responsibilities and what are the customer’s responsibilities? Ever seen this saying…. "Failure to plan is planning to fail"?
Key Step #6: Schedule meetings only as needed and when key players are available
Meetings should be used judiciously and should include all key players. Schedule them when everyone is available (can you say “group calendar”?) and have a specific agenda in place. Each meeting should have a specific goal or outcome – whether the goal is to resolve a problem with the project, assign additional responsibilities, or something else. Never call a meeting just to give a status update – that can and should be done in writing, on a regular basis, and shared with all stakeholders. We’re in the high-tech industry – use video conferencing and other technology tools to maximize communication and collaboration and minimize wasted time.
Key Step #7: Make sure adequate testing is included in the project timeline
Testing is essential to project success. Advance testing should be done whenever possible at the integrator/VARs facility. Testing should be done again, and again as the implementation continues on the customer site. Once the project is complete, the customer should have a testing program of their own – using their own employees and a testing script.
Key Step #8: Have a plan in place in case the “go-live” or “cutover” fails
Heaven forbid a “go-live” doesn’t go as planned, but it happens every now and then. The integrator/VAR and the customer project leader need to have agreed ahead of time on what go-live success looks like -- and when it's time to admit failure and begin again another day. There always should be a backup plan in case a “go-live” fails and the failure issues can’t be resolved by the integrator and the IT department.
Key Step #9: Make sure the “go-live” or cutover is scheduled for minimum disruption and maximum support availability
OK – this sounds like an oxymoron because it generally means that the “go-live” will occur over a weekend (or heaven forbid, over a holiday). It is the responsibility of the integrator/VAR to make sure that their own support team and any additional support from vendors will be readily available if needed. It is the responsibility of the customer IT department to make sure that all relevant staff is available on site or easily accessible. Again, think “technology”. Video collaboration and conferencing? Presence to know exactly what expertise is available?
Key Step #10: Build in adequate training
While most integrators/VARs understand the importance of training, it is all too often cut or skimped on in the proposal, or removed because the customer didn’t build it into the project budget. “Easy-to-use” just isn’t usually true when it comes to technology. What may be “easy-to-use” for someone with a technical background is not necessarily so for the average end-user. And nothing will label a project a “failure” faster than end-users not embracing and using the new solution. Adequate communication and training are vital to the success of any UC or collaboration project when it will change how end-users perform their daily jobs.
So good luck and remember….. you can never do too much planning or communicating on a new project!
Cloud-based unified communications services have transformed the ways businesses communicate. With unified communications as a service (UCaaS) solutions enabling traditional communication and collaboration tools to be mobile and portable, businesses can have a seamless communication environment available anywhere, anytime. This new seamless communications environment has helped businesses become more nimble, respond faster to their customers and streamline their business processes.
With technology and service delivery evolving rapidly, it is important to note that not all unified communications solutions are equal. Pure hosted UCaaS solutions lack reliability as well as many of the enterprise features companies rely on that are available with traditional on-premise solutions. Hybrid solutions that combine the best of on-premise and hosted are emerging.
Hybrid cloud unified communications integrates cloud-based UC functionality with an on-premise device that supports advanced telephony capabilities primarily found in on-premise solutions. The multi-functional device, sometimes referred to as a service point, sits on premise and is a plug and play device similar to a set-top box. It is used to manage SIP devices such as desktop IP phones as well as distribute functionality. Most importantly, it is also used as an intelligent point for troubleshooting problems that originate on-premise.
Reliability is an ongoing concern with a pure hosted UCaaS solution and is typically end users’ most frequent complaint, because business communications are the lifeblood of any SMB. Paul Faircloth, owner of Mosquito Creek Outfitters, a retail store providing quality outdoor gear and apparel to cater to the outdoor enthusiast lifestyle, sums it up best: “Before we deployed our hybrid cloud UCaaS solution we lost an average of $20,000 a day when the VoIP phone system was down, including irreparable damage to customer relations and reputation, which are priceless.”
The hybrid cloud UCaaS service point devices provide:
- Telemetry tools for end-to-end visibility and proactive management on both the LAN and WAN enabling the highest level of quality and reliability. Having visibility down to the device on the LAN virtually eliminates cross-vendor finger pointing if problems arise.
- Advanced call handling and features such as paging that are technically difficult to deliver reliably via cloud.
- Distributed communication services using SIP trunks from multiple geographically dispersed carriers to create a cohesive multi location solution with integrated numbering, voicemail, and presence.
- Additional fail over and redundancy capabilities including the ability to continue to use the phone system internally when external network connections are not available.
- SIP trunk termination without additional hardware. Organizations are choosing to run SIP trunks to the premise to consolidate traffic and to increase call quality, reliability, security and efficiency by maximizing the use of network capacity to drive down costs. With a service point, no additional hardware is needed for SIP trunk termination.
Epizyme recently replaced its hosted VoIP system with a hybrid UCaaS solution because calls were frequently dropped, words would fade in and out, and the problem could never be resolved. This is a common complaint with pure hosted solutions because there is no way to assess the entire network – LAN, access connection, and upstream carrier – for call quality and diagnose the problem. The result is finger pointing among the vendors and lingering issues that never get diagnosed and corrected.
The company selected a hybrid UCaaS solution and implemented SIP trunks to the premise. By running SIP trunks to the premise, the company wanted to improve call quality and performance, simplify administration, and future-proof the deployment so that it could take advantage of video and other new applications as they become available.
“Prior to installing a hybrid solution, we couldn’t rely on the phone service,” said Kevin Kaedin, Senior Systems Administrator for Epizyme. “We wanted a VoIP solution that was reliable and easy to administer.”
Recently, the Dali Museum implemented a hybrid UCaaS solution to replace its 20+ year old legacy phone system. The museum is dedicated to increasing knowledge and awareness of Spanish surrealist painter Salvador Dali. The organization wanted a system that would improve customer service and provide worker flexibility with mobility features. The museum also had problems with reliability and wanted to run SIP trunks directly to the premise.
The museum selected a hybrid UCaaS solution from a cable systems operator to provide a combined solution of voice and data services running over a fiber network. The hybrid UC solution’s service point is able to terminate SIP without any additional SIP-to-TDM conversion equipment, further lowering TCO.
“We evaluated several large and small companies that either offered locally or remotely hosted systems before selecting a hybrid solution. The solution offered the best package in terms of cost, setup, equipment and service,” said Eric Crispen, Director of Information Technology for Dali Museum. “The lack of downtime and issues since we installed the new system has enabled us to feel secure in our telecom system, so we can solve other issues within the museum, saving us valuable time and manpower.”
Take a close look at your UCaaS solution options before you buy. They are not all alike.
I observed an interesting debate on Twitter a couple weeks ago between an advocate of "enterprise" computing and an Amazon Web Services champion. After it went back and forth for a while, I bit and offered my contribution: Somebody is using a ton of AWS, and it's growing like crazy.
Listening to this debate reminds me of the Men Are From Mars, Women Are From Venus discussion about how two people can discuss something and still fail to understand the other person's basic perspective. In the case of this Twitter debate, the discussion failed to address a key question: What are the requirements of the applications running in those environments?
The crucial point is that those who defend enterprise computing fail to grasp that legacy IT infrastructure and operations don't address the requirements of the new application types I label the "three M's": mobile, media and marketing. These apps are flocking to public cloud computing because they're not well served by traditional infrastructure and are much better aligned with what cloud computing brings to the table.
It's critical to understand the characteristics of these applications to understand why demand for cloud computing is in its early growth phase, and why we're about to see its already rapid adoption accelerate even further.
Legacy enterprise applications could be tuned to a couple operating systems and a few browsers. They also had very predictable user populations and use patterns. The emphasis for these kinds of applications vis-à-vis infrastructure is to implement a static environment and make it difficult to modify.
Mobile applications, on the other hand, are very different. They run on a wide range of devices, which multiplies the combinations of interfaces that applications need to be able to support. Moreover, companies often provide API interfaces to their applications so that independent developers can create applications outside the purview of the company's own IT organization; the company won't even know what devices are going to be in the user population.
The growth of APIs is one of the real underreported stories of the past couple years, but it's huge and very much driven by a mobile application world. (The upshot of this is that mobile applications pose significant challenges to the design and operation of legacy applications.)
Public cloud computing environments, by contrast, are well-suited for the demands of mobile applications. High-load variability is easily handled by their very large infrastructures.
It's something of a misnomer these days to talk about a media company, since every company is becoming a media company. Video is becoming the sine qua non of how companies communicate with important stakeholders. And it's huge. Every year Cisco Systems comes out with five-year projections of Internet traffic, and every year the company ups them. The reason? Video.
Every company is leveraging video in one of the following ways-and this list is certainly not exhaustive:
· Marketing campaigns, particularly those with a clever or snarky twist, like the Will It Blend? series from Blend-Tec that has racked up 221 million views. Every company's fervent wish is that its marketing video will go viral and drive heightened consumer interest in its products.
· Partner, user or employee training. There's no faster way to demonstrate how to use a product than with a video. Plus, video is much more engaging than written documentation.
· Announcements. Video is a good way to make a company's announcements stand out from the blizzard of written press releases spewed onto the Internet every day.
Video is a huge consumer of bandwidth, and it's very sensitive to latency disruptions. The average company's internal network is insufficient to support the kind of traffic video requires, and the network capacity that is available is tuned to support legacy transactional application needs. When you marry mobile and video, it's obvious that legacy infrastructures are inadequate to support the requirements of these applications.
Now that marketing and advertising have shifted decisively to the Internet, their nature is changing as well. Because ad delivery used to be so difficult, marketing and advertising campaigns remained static. Rolling out a new TV ad across the U.S. required getting new tapes to multiple TV stations and cable/satellite providers. The process-especially making sure everyone had the right version of the ad-was so time-consuming that changes were relatively infrequent.
Today, by contrast, online marketing and advertising campaigns are served up centrally. This reduces the change overhead by more than 90 percent.
But guess what? Reduced friction encourages more change. In turn, that requires changes to both infrastructure and application code. In other words, it means a radically reduced application lifecycle, and that runs smack into legacy infrastructure managed to reduce change, with management controls such as ITIL imposing manual processes to control infrequent change.
The expectation for the next generation of marketing and advertising is that campaigns can be rolled out quickly, modified rapidly and terminated immediately. If you read my last post on cloud computing budgets, you know this kind of application is projected to be a majority of IT spend in 2017.
Consequently, the expectations of the majority of IT spending are going to confront the legacy practices and processes for enterprise infrastructure and applications. It's not going to be pretty. While many in the IT community have an unspoken wish that things will settle down and we'll go back to the old ways of doing things, that wish can pretty much be written off. These expectations aren't going to go away.
If anything, one can predict they will be pushed even more strongly as the possibilities of online marketing become more embedded in the discipline. In five years, the everyday expectation will be marketing campaigns tuned daily, or even hourly, according to real-time analytics performed on tracking data.
I expect the discussion to be over in five years. The definition of enterprise will have expanded to incorporate the requirements of the three M's, and the practices and processes of legacy IT will have been discarded as inadequate for those needs. Mobile, media, and marketing will force as much change into IT as the PC did, and, as the change plays out, with just as much disruption.
Today communication with patients is fragmented at best, and family communications can only be described as dysfunctional. Recently, at one of the largest hospital systems in Maryland, I tried to find out the date, time and location of an appointment for a relative. She was too doped up on pain meds to remember the details or where she put the reminder paperwork. It took a day and a half and four phone calls, and it was not resolved until I physically appeared at the doctor’s office. The office, by the way, had no recommendations for confirming appointments with other doctors beyond the path I had just taken. My concern is that I am certain this goes on every day. It makes you wonder how many of the people roaming the hallways of large hospitals are in the same predicament. Not just about appointments, but in search of information for themselves or their family. Maybe a question about a new medication that they forgot to ask during their meeting with their doctor. Maybe something they forgot to tell someone that might be relevant to their health. Few of us walk out of a doctor’s office without thinking of something we forgot while on the way to our next destination.
Many healthcare providers are considering centralizing their appointment process using 30-year-old contact center technology. This is admirable; however, it only solves one of the many communications problems that patients encounter every day. Coincidentally, I am aware of a Fortune 50 insurance company that is engaged in an effort to develop a patient collaboration interface. They have been trying to deliver this interface for years, but have been prevented from moving forward by the high cost of computer-related support calls from patients and the high cost of proprietary software. With Web Real-Time Communications (WebRTC), these barriers are rapidly being overcome.
At this point it is worth exploring who is best served by building a patient collaboration interface. It seems to me that patients are the prime beneficiaries; however, I am sure there is great debate over who should host such a service…payers or providers? One of the game-changing elements of WebRTC is that both can implement solutions that support patients’ needs, and patients can be supported collaboratively throughout the spectrum of the health care process. The applications will naturally dovetail together to support the entire process, and they can accomplish this without the need for technology or corporate federation.
WebRTC is a new standard that Google is supporting for browser-to-browser communications. It supports these communications without the need to download an app or plugin, and it works on any smartphone, tablet or PC that can surf the web. WebRTC is transformational, but the big transformations will not be built by the technology providers or even start-ups; they will be built by intelligent business people who can harness the capability to transform their business and gain significant competitive advantage. In the early days of the web and the browser, the early adopters that used new techniques to reduce transaction costs were able to gain market share. Across all industries, the web changed the business models. The webification of telecommunications with WebRTC will create the same opportunity, with richer interfaces that will extend well beyond traditional enterprise communications boundaries. Further, there will be over one billion WebRTC-enabled devices in use by the end of 2013, so the innovation wave has already started.
WebRTC-based collaboration interfaces are secured with end-to-end encryption that is superior to telephone communication. Access is restricted with username and password requirements. Initiation of communications and display of web content is secured with Secure HTTP (HTTPS). Transport of communications for file transfer, text, audio and video is encrypted with the Secure Real-Time Transport Protocol (SRTP). Currently, encryption of communications ends at the edge of the enterprise; WebRTC extends it all the way to the user’s browser.
The availability of customizable patient communications directories is the first element of a patient collaboration interface that will make a difference. These directories can include legacy 10-digit numbers and/or hyperlinks to communicate browser-to-browser or browser-to-telephone. This way doctors can always be available without the need to disclose their cell phone numbers. These directories can be systematically gleaned from the patient’s medical record or manually updated by staff, patients or family members. They are not limited to hospital employees; they can include ambulance services, physical therapists, claims adjusters, clergy or even contact centers that support things like managing appointments.
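To make the idea concrete, a directory like the one described above could mix legacy phone numbers and browser-to-browser hyperlinks in a single structure. The sketch below is purely illustrative; the class names, fields and URL are assumptions, not drawn from any real product:

```python
# Hypothetical sketch of a patient communications directory entry.
# All names, fields and URLs here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DirectoryEntry:
    name: str
    role: str                        # e.g. "surgeon", "claims adjuster", "clergy"
    phone: Optional[str] = None      # legacy 10-digit number, if any
    webrtc_url: Optional[str] = None # hyperlink for browser-to-browser calling

@dataclass
class PatientDirectory:
    entries: List[DirectoryEntry] = field(default_factory=list)

    def reachable_by_browser(self) -> List[DirectoryEntry]:
        # Entries that expose a WebRTC hyperlink rather than only a phone number
        return [e for e in self.entries if e.webrtc_url]

directory = PatientDirectory([
    DirectoryEntry("Dr. Smith", "surgeon",
                   webrtc_url="https://example.org/call/dr-smith"),
    DirectoryEntry("City Ambulance", "ambulance service", phone="5551234567"),
])
print([e.name for e in directory.reachable_by_browser()])  # ['Dr. Smith']
```

Because each entry carries both contact paths, the same directory can drive a click-to-call hyperlink or fall back to a browser-to-telephone call.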
WebRTC supports screen sharing and file transfer from the browser on the patient’s device of choice. This means that test results, financial paperwork and/or images can be shared among patients, family, health workers and insurance professionals. Legacy collaboration applications require technology and corporate federation in advance in order to share files or screens across corporate boundaries; WebRTC does not. The richness of the content that patients and healthcare professionals can share supports more thorough communications. One day a patient may collaborate with a financial professional on completing a government form, and the next they may share a photo of a sore on their foot with a nurse.
In-home care is greatly enhanced by the real-time nature of these communications and the availability of inexpensive Bluetooth devices to monitor the health of the patient. Further, patients can be prompted to score their pain or comfort level on a systematic basis. Based on business rules, these events can be automatically escalated to a communications session, audio or video, in the event that intervention is necessary.
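A business rule of the kind described above might look like the following sketch. The thresholds and session types are assumptions chosen for illustration, not taken from any specific product:

```python
# Illustrative pain-score escalation rule; the thresholds and session
# types are assumptions, not from any particular healthcare system.
from typing import Optional

def escalate(pain_score: int) -> Optional[str]:
    """Map a self-reported pain score (0-10) to a session type, or None."""
    if pain_score >= 9:
        return "video"   # severe: open a video session with a clinician
    if pain_score >= 7:
        return "audio"   # elevated: start with an audio call
    return None          # within tolerance: keep monitoring

print(escalate(5), escalate(8), escalate(10))  # None audio video
```

In practice the rule engine would also consider the patient's history and current treatment, but the core idea is the same: a routine self-report is silently logged, while a worrying one automatically opens a live session.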
Big Data is being used to analyze the behavior of patients to determine risk and possible treatment options. This has been done in the past, but the results were often delayed by days, weeks or months. With the advent of Big Data, these calculations can be made in real time. Calls from patients to healthcare workers can be accompanied by statistical recommendations based on the web browsing history of the patient, their medical record, their current treatment and what web page they were looking at when they decided to communicate.
Seamless integration with legacy telecommunications systems and wireless devices is available. Further, these systems can be configured in a duplicated architecture to support the fault-tolerant needs of the healthcare business.
While there are other benefits for payers and for internal communication within the healthcare community, patient collaboration is the real game changer. Within the next 12 months, several products will come to market to support patient collaboration. Pricing for these solutions will be an order of magnitude lower than current proprietary systems; the numbers will be more Magic Jack than AT&T. The question for healthcare providers and payers is not whether to adopt, but whether to build a solution or contract with a cloud-based service provider.
According to a new study released by MarketsandMarkets, a global market research and consulting company based in the U.S., the market for the cloud version of UC (UCaaS – unified communications as a service) is expected to grow from $2.52 billion in 2013 to $7.62 billion by 2018, at an estimated CAGR of 24.8%. Telephony is the most used technology for now and will remain so for the next few years. The global UCaaS telephony market is expected to grow from $0.87 billion in 2013 to $2.48 billion by 2018, at an estimated CAGR of 23.3%. This is great news for channel partners offering UC solutions from the cloud.
Interestingly, the most significant growth comes from the collaboration area, reports the study. The UCaaS collaboration application market is expected to grow from $540.74 million in 2013 to $1.75 billion by 2018, at an estimated CAGR of 26.5%. Companies across all verticals are using UCaaS to integrate web conferencing, video conferencing, messaging, VoIP and presence. Cloud delivery and integration help decrease up-front capital cost because the applications are offered on a per-seat basis, which enables businesses to scale communications easily and effectively, with the end result of reducing travel time and creating leaner business processes.
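As a quick sanity check, the reported CAGRs can be reproduced from the start and end revenues over the five-year window (the dollar figures are from the study cited above):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over a number of years."""
    return (end / start) ** (1 / years) - 1

# Overall UCaaS market: $2.52B (2013) -> $7.62B (2018)
print(f"{cagr(2.52, 7.62, 5):.1%}")      # 24.8%

# UCaaS collaboration applications: $540.74M -> $1.75B
print(f"{cagr(0.54074, 1.75, 5):.1%}")   # 26.5%
```

Both results match the figures quoted in the report, so the projections are internally consistent.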
Most of the major UCaaS players identified in the report come as no surprise – Avaya, Cisco, Microsoft, Alcatel-Lucent, Interactive Intelligence, Siemens Enterprise Communications, Mitel, and NEC – although the inclusion of Panterra Networks and CSC on the list did raise eyebrows. Perhaps reading the report will bring clarity on why these two organizations were included.
According to their press release, MarketsandMarkets have the report available for purchase at http://www.marketsandmarkets.com/Purchase/purchase_report1.asp?id=893
VARs, integrators and telecom dealers may not sell smartphones or provide the carrier services to make them work, but there are some amazing revenue opportunities that have been created by BYOD in the enterprise. And best of all, the majority of the opportunities are in the services area, which brings higher margins than product sales. According to the Gartner CIO Agenda 2012 study, mobile technology and solutions are very high on the agenda of a majority of CIOs – higher than UC and collaboration.
Whether it’s implementing mobile UC for their end-users or addressing the challenges of BYOD in their own enterprise, the topic of mobility can be an excellent “conversation starter” when meeting with IT staffs or CIOs. And consider this, as BYOD continues to grow at the enterprise level, it should pull mobile UC along with it. For an employee who is now primarily communicating on his smartphone, how does a customer or another employee reach him effectively and efficiently? Mobile UC!
Where are those opportunities for the channel? Think creatively and strategically and you’ll find them!
Already offering a VoIP product that has mobile UC capabilities, either in a client environment or inherent in the VoIP product itself? Learn what that VoIP product can do with mobility and then visit your existing customer base and have a “mobility” discussion. Is there additional revenue available by adding mobile UC capabilities to their existing voice system? This could be a good source of easy incremental revenue.
The need for a solid mobile UC solution could also lead to a communication system upgrade or an entirely new system for an existing or new customer.
Along these same lines comes the potential for network and Wi-Fi assessments (professional services) as well as projects to upgrade network infrastructure to accommodate increased voice traffic or Wi-Fi for internal smartphone users.
For those in the channel who are services focused, consider developing a set of policies and procedures for managing BYOD in the enterprise. From the perspective of the customer’s IT department, controlling BYOD – to protect company data as well as control mobile spending – is a growing issue. Both existing and new customers could be candidates for this service that would not only provide high margins but be a competitive differentiator as well.
Are you an MSP? What about offering mobile device management (MDM)? MDM software from companies like MobileIron is now readily available to secure and manage mobile applications, documents, and devices. As BYOD continues to grow across enterprises, MDM sales and services will grow also.
Thinking “outside the box”, security is one of the most serious concerns with BYOD. Company data now resides on personal smartphones, which can be lost. Data residing on company servers is at risk of being hacked through those same personal smartphones. Companies that have already taken major steps to secure their information from internet intrusion are now finding it vulnerable via smartphones. Consider the industries for which security is vitally important (government and healthcare, to name the most obvious). Develop expertise in this area and reach out not only to existing and new customers but also to other channel partners that need to add “security expertise” to their portfolios but don’t have the training or knowledge to do it themselves.
Historically, the carriers – AT&T, Verizon, Sprint, etc. – focused on consumer, personal smartphone sales. Today, with BYOD and mobile UC growing, they are actively engaged in finding ways to capture the growing business customer. The agent model is their immediate best bet to reach that customer and the agent relationship can provide a lucrative recurring revenue stream for little effort or financial commitment.
VARs/MSPs, integrators, and telecom dealers – don’t let these opportunities slip away. This is a relatively new area where customers are plentiful and competitors are few!
I hate talking about topics of the week, such as the debate around Yahoo's new CEO, Marissa Mayer, telling her staffers to stop working from home.
First, in my opinion, CEOs are allowed to make such statements to their employees, and you can't judge unless you work there or own stock. Second, it probably won't help Yahoo one bit.
However, what is relevant about this issue is the use of cloud computing by a remote workforce. What are those synergies? That's worth discussing.
The work-at-home movement drives a great deal of interest in cloud computing. Public cloud platforms are typically better at providing IT services over the open Internet than enterprise IT is capable of doing. Thus, the public cloud can better serve a workforce that's as likely to work at the local Starbucks as the corner conference room because they can push processing, storage, and enterprise applications to a middle tier between the company and the user. In other words, connectivity, security, capacity management, and resiliency become somebody else's problem.
Indeed, the more distributed your workforce, the more public cloud computing can benefit the support of that workforce. Innovative enterprises are adopting Dropbox or Box.net for file sharing services, taking up Google Apps for office automation and collaboration, accessing SaaS-based solutions such as Salesforce.com for CRM, and beginning to migrate large portions of operational data to public IaaS providers such as Amazon Web Services. If you add mobile computing and BYOD to the equation, the public cloud becomes even more compelling.
Of course, some companies push back on public cloud computing with the normal excuses, including security, privacy, ownership, and so on. But those businesses typically don't offer work-at-home options to their employees, I've found.
While a remote workforce issue is typically not the only benefit that drives business to the cloud, it's often on the radar. Moreover, companies innovative enough to create a strong remote workforce are typically the organizations that accept cloud computing. If they trust people to work poolside, then trusting public clouds is not much of a stretch.
Google recently launched its high-end Chromebook Pixel, and like previous Chromebooks this notebook computer makes a distinctly 21st Century assumption: that users' data, work and play belong mostly online, not on their own computers. Google isn't alone in pushing this notion, but it's the most powerful evangelist for the shift to what tech people call the "cloud" and away from "local" storage.
Call me unconvinced. Deeply unconvinced.
The cloud evangelists have an alluring pitch. First, they say, we can now count on being connected as much of the time as necessary. Second, computing and data services are becoming a utility like electricity – easier and safer to run from remote servers than on our local systems.
Like almost everyone else, I use lots of cloud services. They start with everything I do from a browser, such as search, microblogging (Twitter), multiuser games, etc. They also include my email (I store a few weeks' worth of messages in an online system that shows me the same inbox and folder structure no matter what computer I'm using) and calendars, but in those cases I'm synchronizing the data to the local machine. And I use several online sites to back up my music and important documents.
But move everything to the cloud, and use it in an on-demand way? No chance, at least not now – and probably not ever.
For one thing, web-based applications simply can't match the power and flexibility of native desktop software, at least not yet. Google Docs does many things well enough for simple tasks, but that's not good enough when I need, say, the track changes feature in Microsoft Word or its Linux equivalent, LibreOffice Writer. Online applications are getting better, and they can do some things the offline ones can't, of course; there are tradeoffs that over time will make the online offerings more compelling. And as Google and other web-based software companies make it possible to work offline – you can do that now with Google Docs – one more advantage of local computing will be mooted.
It's harder for me to imagine cloud computing ever being fully trustworthy. The idea that data is like electricity is only partly true. The electron that comes to me from the power grid is identical to the electron that goes to someone else. This isn't true for data, except at the most basic level, where all information can be reduced to zeros and ones. Put a bunch of electrons together and you still have just a bunch of electrons. Put a bunch of bits together in different orders, and they are completely different.
The promoters of the live-in-the-cloud vision tend to minimize the downsides. Online databases are vulnerable to hacking; hardly a day goes by anymore when we don't hear of yet another breach. Outages on networks or individual services are all too common. Centralized databases, owned and operated by big companies, are one-stop shops for government snoops.
One reason the cloud has become so useful is the same reason we should have a "local storage" backup as well: The companies that make disk drives and solid state storage (SSDs) keep improving their technologies, making storage cheaper and with vastly more capacity all the time. You can buy a portable hard disk with 2 terabytes (2 million megabytes) of storage for under $150. The micro-SD card, smaller than a fingernail, now holds 64GB for about $50; eventually it'll hold 2 terabytes at a comparable cost. In fact, the storage industry has outpaced everyone else in tech with its exponential improvements.
There are dangers in local storage, too. The chief one is disk failure. But other mishaps can occur, too, including physical loss of the backup. I made a terrible mistake last fall that cost me weeks of work on a project, because I bungled my backups. I was creating full and incremental backups to several external disk drives, rotating among them to ensure that nothing would be too old. But I made two crucial mistakes: I didn't back up several key folders to my normal online services, because I'd moved them on my laptop to a part of the drive where they were no longer automatically added to the online folders. Worse, I failed to test the "restore" function of my backup software, which was encrypting the files; when I needed it most, it didn't work. I kicked myself for a couple of weeks, and moved on – with a different and (I believe) much safer routine.
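The lesson here – a backup you have never test-restored is not really a backup – is easy to automate. As a minimal sketch (the helper names `sha256_of` and `verify_restore` are my own, not part of any particular backup tool), you can checksum the original file and the restored copy and refuse to trust the backup unless they match:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't load into memory at once.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original: Path, restored: Path) -> bool:
    """A backup only counts if the restored copy matches the original."""
    return sha256_of(original) == sha256_of(restored)

# Simulate a full backup/restore round trip in a temporary directory.
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "project.txt"
    src.write_text("weeks of work")
    backup = Path(tmp) / "backup" / "project.txt"
    backup.parent.mkdir()
    shutil.copy2(src, backup)          # the "backup" step
    restored = Path(tmp) / "restored.txt"
    shutil.copy2(backup, restored)     # the "restore" step
    assert verify_restore(src, restored)
    print("restore verified")
```

Had my encrypted-backup software been run through a round trip like this even once, the broken restore would have surfaced months before I needed it.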
I can't – and don't want to – avoid using the cloud for many tasks. But I won't solely rely on it for backups and working documents. My approach is to use both, and to encrypt my files in both places.
Giving further credence to the growing use of the iPad in the enterprise, ShoreTel recently announced the availability of two new releases of their mobility and collaboration products that have been optimized for the Apple iPad, enabling integrated collaboration capabilities, increased accessibility and improved communications, regardless of the user’s location. Can we deny that BYOD is here to stay?
ShoreTel Mobility 6 makes it easier to use mobility features on the enterprise users’ iPad. Imagine sitting in an airport and using your iPad to place calls that appear to be coming from your office desktop phone (your “business persona”).
ShoreTel Conferencing for iOS offers application collaboration capabilities. Users “easily share presentations controlled by their iPad or iPhone with remote participants; or can view shared desktops of their colleagues’ PC and Macs”, according to the press release.
In a quote from the press release, “The Apple iPad has quickly become the most popular tablet brought by users into the workplace,” said Peter Blackmore, chief executive officer, ShoreTel. “ShoreTel transforms the iPad into a true multi-modal business communications device – for placing and receiving calls just like a desk phone, for sending and responding to instant messages, and for easily collaborating with other PC, Mac and iPad users. By combining these applications together in a manner that is brilliantly simple for employees to use, businesses can feel comfortable supporting a BYOD policy to drive effective communications and enhance productivity.”
For those VARs, integrators, and telecom dealers who haven’t yet seen the opportunities that BYOD can bring to their business, this announcement should serve as a wake-up call.
Having wanted to attend Jeff Carr’s Suits and Spooks (SNS) event for a number of years, life offered me a touch of luck since his latest event landed in Arlington, VA – just a quick jaunt from our Iron Bow offices.
Over the years, I have not always agreed with Carr’s analyses, but that is neither here nor there because the infosec intelligence game has enough facets and variables that “the truth” often becomes immaterial since we often are only left with cinder, smoke and consequences.
Differences aside, there’s no denying Carr’s ability to put together an interesting and palpably visceral event which brings together actual thinkers and doers. Furthermore, it offers both the speakers AND the audience an academic-style environment in which they can actually have bidirectional communication.
While standard briefings-as-information-dump-trucks have a certain value, I’m from the school of thought that a conversation offers tangible value. It’s the idea of having a conversation vs. being talked at. SNS is a real-time live forum, with breakout panels, Twitter feeds and a limited audience to allow for coherent discussion. In the world of bloated industry events – it’s a breath of fresh air.
Here are a few of the discussions that stood out to me:
There seemed to be a general consensus amongst both the spooks and the suits: the security industry is at an impasse. Our technologies are limited, and the threat surface of any entity is approaching infinity. From apps to networks, third-party partners, cloud providers and social media exposure – when a finite resource is pitted against infinite odds, catastrophic failure seems imminent.
In fact, we’ve seen some interesting lapses from our own industry peers. There have been a number of vendor-related technical fiascos over the past few years.
One event in recent memory sparked a fair amount of discussion simply because the vendor allegedly didn’t eat its own dog food and had its own key stores hacked, such that its own product was signing/whitelisting malware. While such events aren’t at all shocking, sacred cows being turned into hamburger seems to be an ongoing theme. I expect many multi-vertical and multi-platform BBQs will ensue as 2013 rolls along.
All that being said, how many of us perform code reviews, QA or any real kind of analysis of the security products we depend on? What about the trusted cloud provider who is a repository for your data? The cloud based authentication system? If you aren’t testing the security and resilience of their products, why would they? Do they have the in-house resources? How many of us do? What assurances do you have regarding your provider’s SDLC or security posture? If you aren’t auditing your supply chain, why would you think they are performing appropriate due diligence?
Where things got interesting was how various individuals and organizations saw government and legal frameworks intersecting with the private sector. The situation is exacerbated because state-on-state threats tend to imply a need to legislate at an international level. To some extent, these discussions are moot, as legislation tends to have minimal impact on those who do not feel bound by it – namely criminals and other state-sponsored actors.
Some other thoughts triggered by these conversations:
· Our legal frameworks are woefully (years and years) behind the actual threats being faced by organizations.
· There needs to be a focus on threats to businesses rather than a purely risk- and compliance-based approach. This may offer some efficiencies in the coming years of decreased budgets.
· The infosec world is acknowledging that it is functioning in a world of smoke and mirrors, and that attribution is a dream. Therefore, in the coming years internal and external intelligence programs will become increasingly critical for maintaining an organization’s operational state.
· And a fun quote by proxy, “Anonymous is God’s gift to the Chinese”…and the Russians, and the French and the Brazilians.
Overall, SNS is a well done conference and I will make certain that I attend the next one being held in La Jolla, CA this coming June 15-16, 2013. According to Carr, the next SNS will be focused on “exploring intersects between the U.S. Special Operations Forces community and the private Information Security community.” This should bring out at least a handful of interesting people from both the Suit and Spook side. Keep an eye open at Taia Global for details.