28 May 2018
Many things need to be considered before choosing the right hardware for a Hadoop cluster. Hadoop workloads tend to vary a lot between different jobs, and it takes experience to correctly anticipate the amounts of storage, processing power, and inter-node communication that different kinds of jobs will require. Disk space, I/O bandwidth (required by Hadoop), and computational power (required by the MapReduce processes) are the most important parameters for accurate hardware sizing.
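As a rough illustration of that sizing arithmetic, the sketch below estimates the raw disk capacity a cluster would need from a handful of inputs: daily ingest, retention period, HDFS replication factor, headroom for intermediate MapReduce output, and a target fill ceiling. The figures and defaults are illustrative assumptions, not recommendations from this article.

```python
# Back-of-the-envelope sizing sketch. All input figures (ingest rate, retention,
# replication factor, temp-space overhead, fill ceiling) are illustrative
# assumptions, not prescribed values.

def raw_storage_needed_tb(daily_ingest_tb, retention_days, replication=3,
                          temp_overhead=0.25, capacity_ceiling=0.70):
    """Estimate raw HDFS disk capacity required across the cluster.

    replication      -- HDFS block replication factor (commonly 3)
    temp_overhead    -- fraction reserved for intermediate MapReduce output
    capacity_ceiling -- keep disks below this fill level for healthy operation
    """
    logical_data = daily_ingest_tb * retention_days
    replicated = logical_data * replication
    with_temp = replicated * (1 + temp_overhead)
    return with_temp / capacity_ceiling

# Example: 1 TB/day ingest, one year of retention, replication factor 3.
print(f"{raw_storage_needed_tb(1.0, 365):.0f} TB raw capacity")  # ~1955 TB
```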
The right configuration for a Hadoop cluster depends on the workload patterns it will run:
Balanced workloads: jobs are distributed equally across the various job types (CPU bound, disk I/O bound, or network I/O bound).
Compute-intensive workloads: these workloads are CPU bound and are characterized by the need for a large number of CPUs and large amounts of memory to hold in-process data. This usage pattern is typical of HPC (high-performance computing) workloads such as clustering/classification, complex text mining, natural-language processing, and feature extraction.
I/O-intensive workloads: a typical MapReduce job (such as sorting, indexing, grouping, data import and export, data movement and transformation) requires very little compute power but relies more on the I/O capacity of the cluster (for example, if you have a lot of cold data). Hadoop clusters used for such workloads are typically I/O intensive. For this type of workload, we recommend investing in more disks per box (a rough throughput estimate is sketched below).
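To make the "more disks per box" point concrete, here is a minimal sketch of aggregate sequential read throughput per DataNode, assuming roughly 100 MB/s per SATA spindle and ignoring controller, network, and replication overheads. The per-disk figure is an assumption; substitute measured numbers from your own hardware.

```python
# Illustrative only: assumes ~100 MB/s sustained sequential throughput per
# spindle and ignores controller, network, and replication overheads.

PER_DISK_MB_S = 100

def node_scan_rate_gb_per_min(disks_per_node):
    """Approximate aggregate sequential read rate for one DataNode."""
    return disks_per_node * PER_DISK_MB_S * 60 / 1024

for disks in (4, 8, 12):
    print(f"{disks:>2} disks -> ~{node_scan_rate_gb_per_min(disks):.0f} GB/min per node")
```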
Most teams looking to build a Hadoop cluster are often unaware of their workload patterns. Also, the first jobs submitted to Hadoop are very different than the actual jobs in the production environments. For these reasons, Hortonworks recommends that you either use the Balanced workload configuration or invest in a pilot Hadoop cluster and plan to evolve as you analyze the workload patterns in your environment.
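If you do run a pilot cluster, the analysis it enables might look like the hypothetical sketch below, which classifies finished jobs as CPU bound or I/O bound from their resource usage. The field names, thresholds, and numbers are made up for illustration; in practice the inputs would come from your job history data rather than hard-coded dictionaries.

```python
# Hypothetical sketch of pilot-cluster analysis: classify finished jobs as
# CPU bound or I/O bound from their resource usage. Field names, thresholds,
# and figures are illustrative, not an actual Hadoop counter API.

def classify(job):
    cpu_ratio = job["cpu_seconds"] / job["slot_seconds"]          # CPU time vs. occupied slot time
    mb_per_slot_sec = job["hdfs_bytes_read"] / job["slot_seconds"] / 2**20
    if cpu_ratio > 0.7:
        return "CPU bound"
    if mb_per_slot_sec > 20:
        return "I/O bound"
    return "balanced"

pilot_jobs = [  # made-up numbers standing in for exported job statistics
    {"name": "nightly-sort",  "cpu_seconds": 1_200, "slot_seconds": 6_000,  "hdfs_bytes_read": 500 * 2**30},
    {"name": "nlp-featurize", "cpu_seconds": 9_000, "slot_seconds": 10_000, "hdfs_bytes_read": 20 * 2**30},
]

for job in pilot_jobs:
    print(job["name"], "->", classify(job))
```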