In our last article, we talked about the advantages of taking a company’s Big Data deployment that might have started as an internal proof-of-concept or lab project and “productionising” it in the Cloud. And make no mistake, there are significant advantages to doing so, namely flexibility, extensibility, and risk-transition. We didn’t even mention the opportunity to shift costs from CAPEX to OPEX, which cash-strapped companies will often find preferable.
All that aside, we are not saying there are no advantages to productionising a home-brewed Big Data solution in the company’s own data centre. The thoughtful BI architect planning a Big Data effort needs to understand these advantages and weigh them before making an informed decision about the deployment mode for their solution.
In-House Deployment Advantages
Although many companies have shuttered their traditional data centres and moved their IT infrastructure and service-delivery architectures to contracted providers, others still have data centres on their campuses. These often represent tens of millions of dollars of infrastructure investment: strategic assets that retain the capacity to deliver value.
For some, there are legacy hardware components such as mainframes and tape libraries that cannot simply be turned off, forklifted, and switched back on at a vendor’s location. These companies have no choice but to keep their data centre running. In cases such as these, putting a Big Data deployment in a company’s own data centre creates an opportunity to avoid significant hosting and data-transmission costs.
With a Big Data deployment entirely in-house, a company maintains a greater degree of physical control over its data, often to the extent that raw data never leaves the premises of the company that generated it. This matters for companies whose cultural risk-aversion is high, or whose data stream includes highly sensitive or legally regulated data that must be safeguarded. In such cases, it is often more straightforward to provide a platform in which the company controls all the variables.
Finally, in-house execution strategies give a company the opportunity to exercise end-to-end ownership of the solution, from data capture to analytics delivery. This brings a variety of ancillary benefits: proximity to the systems generating the data streams, greater freedom to customise how the deployment is conducted, and insulation against “vendor lock-in”, the sometimes-intractable situations that can occur when changing implementation partners later, since a vendor being migrated away from is no longer incentivised to cooperate.
While many BI architects increasingly lean toward Cloud solutions (and I mostly agree with them, at least generally), it’s worth thoughtfully considering both strategies in the context of your own company’s particular culture, infrastructure, and technology road-map, so that the deployment plan you design suits your organisation’s needs through the lifespan of the solution.
DataHub Writer: Douglas R. Briggs
Mr. Briggs has been active in the fields of Data Warehousing and Business Intelligence for the entirety of his 17-year career. He was responsible for the early adoption and promulgation of BI at one of the world’s largest consumer product companies and developed their initial BI competency centre. He has consulted with numerous other companies about effective BI practices. He holds a Master of Science degree in Computer Science from the University of Illinois at Urbana-Champaign and a Bachelor of Arts degree from Williams College (Mass.).