Hadoop Matters Blog

Big Industries' blog

Running Cloudera on Microsoft Azure

In recent years, the cloud has offered a scalable, versatile, and efficient environment for big data workloads. In this blog post we explore how companies are deploying Cloudera Enterprise on Microsoft Azure to run production Hadoop workloads (ETL, BI, advanced analytics, Spark) with enterprise-class data security and governance.


Why Do I Need a Data Lake?

What is a data lake?

A Data Lake is an enterprise-wide system for storing and analyzing disparate sources of data in their native formats. A Data Lake might combine sensor data, social media data, click-streams, location data, log files, and much more with traditional data from existing RDBMSes. The goal is to break down the information silos within an enterprise by bringing all the data into a single place for analysis, without the restrictions of schema, security, or authorization. Data Lakes are designed to store vast amounts of data, even petabytes, on local or cloud-based clusters of commodity hardware.
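
To make the idea concrete, here is a minimal PySpark sketch (not from the original post) of the "single place for analysis" pattern: it reads click-stream events stored in the lake in their native JSON format, pulls a customer table from an existing RDBMS over JDBC, and joins the two in one query. All paths, table and column names, and connection details below are hypothetical placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("data-lake-example").getOrCreate()

# Semi-structured click-stream events kept in their native JSON format;
# the schema is inferred at read time rather than enforced on write.
clicks = spark.read.json("hdfs:///datalake/raw/clickstream/")  # hypothetical path

# Traditional relational data pulled from an existing RDBMS via JDBC.
customers = (spark.read.format("jdbc")
             .option("url", "jdbc:postgresql://crm-db:5432/crm")  # hypothetical URL
             .option("dbtable", "public.customers")
             .option("user", "analyst")
             .option("password", "change-me")
             .load())

# Combine both sources in a single analysis: clicks per customer segment.
clicks_per_segment = (clicks
                      .join(customers, clicks["user_id"] == customers["id"])
                      .groupBy("segment")
                      .agg(F.count("*").alias("click_count")))

clicks_per_segment.show()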
