Over this week, I have been reading a lot about new companies creating new industries, markets, and services for the unmet needs of IT organizations as they shift to the increasingly mainstream paradigm of cloud computing. Gone are the days of a 2D IT department! It’s no longer just about Operations and Engineering. The challenges of managing infrastructure outside the corporate wall, processing big data, and the growing threat of cyber attacks have created some cool opportunities – and in this post I want to highlight a few that caught my attention.
Monitoring in the Cloud: Boundary
In the private data center model, senior leadership of IT departments has worked to perfect uptime as the core metric backing the reliability of their service. To ensure that issues are detected and resolved before the business feels the impact, tools evolved to monitor adverse events in the infrastructure environment and speed the response to new incidents.
How do you monitor applications that live in the cloud, send data over networks your organization doesn’t own or have much visibility into, and ultimately deliver the same clarity of fault isolation that leadership is used to seeing from the traditional in-house model?
In this Forbes article, Boundary is described as: “Monitoring vendor Boundary is part of a new wave of companies that provide peripheral services to the new style of applications being built – applications that are distributed, have to deal with rapid scaling, bring together many different services and live in the cloud. Boundary’s goal is to give operations personnel a “single pane of glass” over all the different aspects of their application – it does so by delivering real-time operational data.”
Boundary’s niche is to provide clear visibility into every dimension of a cloud application, particularly the latency between internal resources, third-party components, services, and APIs. They have several case studies from their existing customer base, which grew 400% this year, to support their business case.
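To make the idea concrete, here is a minimal sketch of what "latency visibility across dependencies" could look like in practice – this is my own illustration, not Boundary’s implementation, and the host names and the 250 ms budget are made up:

```python
import socket
import time

def probe_latency(host, port=443, timeout=3.0):
    """Measure TCP connect latency to a dependency in milliseconds.

    Returns None if the host is unreachable within the timeout.
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

def latency_report(latencies_ms, threshold_ms=250.0):
    """Classify measured latencies against a budget.

    latencies_ms maps a dependency name to its measured latency
    (None means the probe failed entirely).
    """
    return {
        name: {"latency_ms": ms, "slow": ms is None or ms > threshold_ms}
        for name, ms in latencies_ms.items()
    }
```

A monitoring agent would run `probe_latency` on a schedule against each internal resource and third-party API, then feed the results to `latency_report` to surface the components blowing their latency budget.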
This type of service will be necessary to give corporate IT departments the ownership and visibility to manage their applications, even if they don’t have eyes on the actual hardware anymore. Expect other entrants in this growing space.
Making Sense of Big Data: Adaptive Planning
Why is it that adding more data to a company doesn’t automatically lead to more value? This article discusses the difficulty of extracting insight from Big Data when it arrives faster than it can be processed. While I think this space will be riddled with different implementations to “make sense” of all that data, Adaptive Planning focuses on applying data quickly and efficiently to your business model.
The documented business model is a bear to keep up to date – every time new data is released, it needs to be updated. Since leadership typically looks at simplified, rolled-up models, changes based on new data can take time to trickle up to the point where meaningful decisions can be made. Man, is it frustrating to have the data, but still have to wait days just to figure out what your company needs to do next in response!
Adaptive Planning builds your business from the ground up in one model that can drill up or down instantly. At the team level, a manager can model the costs, activities, and predictions for what the team will be doing for the next year. The same happens at the department, division, and company level.
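The "one model that drills up or down" idea is essentially a tree where parent totals are computed from children on demand. Here is a toy sketch of that structure – the class and field names are mine, not Adaptive Planning’s:

```python
class PlanNode:
    """One unit in the org tree: a team, department, division, or company."""

    def __init__(self, name, cost=0.0):
        self.name = name
        self.cost = cost          # this unit's own planned cost
        self.children = []

    def add(self, child):
        """Attach a child unit and return it, so trees build fluently."""
        self.children.append(child)
        return child

    def total(self):
        """Drill up: this node's cost plus everything below it."""
        return self.cost + sum(c.total() for c in self.children)
```

Because totals are derived rather than stored, a team-level manager updating one number changes the company-level roll-up the instant anyone asks for it – no separate spreadsheet to reconcile.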
In many industries, acting faster than competitors is the only road to success. Companies like this have been around for a while, but the ones that break out and become market leaders will be the ones that can best integrate live big data sources to produce as close to a real-time model as possible.
Security in the Cloud: Netskope and Skyhigh Networks
Much of the cyber security industry is based on pattern recognition, particularly anomalies that correlate with malware or malicious activity. Without full control of the infrastructure, how do we know which cloud services we can trust with our data? The two companies discussed in this article are focused on making enterprises more comfortable using cloud applications by giving them visibility into which applications are being used within an organization and how they’re being used, and by automatically generating insights to help resolve problems before they arise.
Netskope is releasing its new product, Netskope Active, a solution that promises to deliver Data Loss Prevention (DLP), encryption of data, and detection of unusual usage patterns. Netskope Active can identify all applications running in the enterprise regardless of whether users are on the network or remote, on a PC or a mobile device. Each application is then analyzed against Netskope’s Cloud Confidence Index, a database of 3,000 apps that Netskope has classified according to their “enterprise readiness”. This classification is built on criteria from the Cloud Security Alliance.
This product will fill a gap in the cloud economy by increasing visibility into the cloud apps already in use in your organization, as well as how well they handle your data.
Skyhigh Networks released a similar program, called Skyhigh CloudTrust, which assesses the capabilities of particular cloud services. The program evaluates 50 different criteria including data protection, identity verification, service security, business practices, and legal protection to produce an enterprise readiness score. While companies may have been spending valuable business resources doing this on their own, Skyhigh now gives them the ability to outsource their background checks. And since Skyhigh keeps the ratings updated, the due diligence continues long after you make the decision on a cloud vendor and stop looking yourself.
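At its core, an enterprise-readiness rating like CloudTrust or the Cloud Confidence Index is a weighted combination of per-criterion scores. Here is a minimal sketch of that shape – the criteria names and weights below are invented for illustration, not either vendor’s actual methodology:

```python
def readiness_score(criteria, weights):
    """Weighted average of per-criterion scores.

    criteria: maps a criterion name to a score from 0 to 100.
    weights:  maps the same names to relative importance.
    """
    total_weight = sum(weights[name] for name in criteria)
    weighted = sum(criteria[name] * weights[name] for name in criteria)
    return weighted / total_weight
```

A vendor that aces data protection but has thin legal protections would land somewhere in the middle, which is exactly the kind of nuance a single pass/fail check misses.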
Given the number of SaaS options out there, there’s a distinct need to separate the cloud vendors that aren’t ready for prime time from the ones ready to take on your business. Who knows if this type of score will be as relevant 10 years from now – these companies could be the next Gartner, or the Better Business Bureau for the cloud space.
How Do I Debug the Entire Internet?
As more and more SaaS products are deployed and more employees go mobile or remote, the public Internet becomes a critical path. When a new issue arises, within a few minutes you must be able to answer these questions: Is there a problem on our network? On our vendor’s network? How can we get around it?
ThousandEyes is a system that can visualize and isolate network problems hop by hop, using Deep Network Analysis. It works by placing beacons around the Internet that constantly communicate, sending intelligent probes to characterize network behavior. This data provides evidence when something goes wrong, because paths that normally work suddenly stop communicating.
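The beacon idea can be sketched in a few lines: record the recent probe results for each source/destination pair, and flag the pairs that normally succeed but just failed. This is only the shape of the technique, not ThousandEyes’ actual algorithm, and the agent names are made up:

```python
from collections import defaultdict

class BeaconMesh:
    """Track probe results between beacon pairs and flag sudden failures."""

    def __init__(self, history=10):
        self.history = history
        # (src, dst) -> sliding window of recent probe outcomes (True = ok)
        self.results = defaultdict(list)

    def record(self, src, dst, ok):
        """Append a probe result, keeping only the most recent window."""
        window = self.results[(src, dst)]
        window.append(ok)
        if len(window) > self.history:
            window.pop(0)

    def suspect_paths(self):
        """Pairs with a consistent history of success whose latest probe failed."""
        bad = []
        for pair, window in self.results.items():
            if len(window) >= 3 and not window[-1] and all(window[:-1]):
                bad.append(pair)
        return bad
```

Because every beacon probes many others, one failing path stands out against the pairs that are still healthy, which is what lets you point at a specific hop instead of shrugging at "the Internet is slow."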
This sort of knowledge is also vital for troubleshooting problems for mobile users, deflecting cyber attacks, or even just visualizing a complex network topology to look for inefficiencies. In this article, they even discuss potential downstream benefits, like cost savings from removing dedicated connections to branch offices, since troubleshooting over the public Internet will now be more feasible.
It is my opinion that visualizing the Internet is a much larger market than anyone realizes, and because we have no real model for understanding what’s out there, tools like this could be combined with other services to start making the virtual world feel more tangible.
There’s So Much Out There!
If you are interested in hearing about new companies, the writers in the Forbes technology section are awesome – I learned about these companies by reading their articles. Spread the word! And let me know if there are other companies making a splash.