This post compares the system architectures of several popular Big Data platforms: the Apache Hadoop ecosystem, Google PowerDrill, IBM InfoSphere Streams, and Huawei FusionInsight.
Apache Hadoop is a collection of open-source software utilities that facilitate using a network of many computers to solve problems involving massive amounts of data and computation. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model. Originally designed for computer clusters built from commodity hardware—still the common use—it has also found use on clusters of higher-end hardware. All the modules in Hadoop are designed with a fundamental assumption that hardware failures are common occurrences and should be automatically handled by the framework.
The core of Apache Hadoop consists of a storage part, known as Hadoop Distributed File System (HDFS), and a processing part which is a MapReduce programming model. Hadoop splits files into large blocks and distributes them across nodes in a cluster. It then transfers packaged code into nodes to process the data in parallel. This approach takes advantage of data locality, where nodes manipulate the data they have access to. This allows the dataset to be processed faster and more efficiently than it would be in a more conventional supercomputer architecture that relies on a parallel file system where computation and data are distributed via high-speed networking.
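The MapReduce model described above can be illustrated with a minimal sketch. The code below is not Hadoop's Java API; it is a toy in-process simulation of the three phases (map, shuffle, reduce) using a word count, the canonical MapReduce example. The function names and the `blocks` input are illustrative assumptions.

```python
from collections import defaultdict
from itertools import chain

def map_phase(lines):
    """Mapper: emit (word, 1) pairs, as a Hadoop mapper would
    emit intermediate key/value pairs for each input record."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group intermediate values by key.
    In Hadoop the framework performs this step between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reducer: aggregate the values for each key (here, sum the counts)."""
    return {word: sum(counts) for word, counts in groups.items()}

# Two "blocks" stand in for HDFS blocks processed on different nodes;
# data locality means each mapper runs where its block is stored.
blocks = [["big data needs big clusters"],
          ["data locality moves code to data"]]
pairs = chain.from_iterable(map_phase(block) for block in blocks)
print(reduce_phase(shuffle(pairs)))
```

In a real cluster, each `map_phase` call would run in parallel on the node holding its block, and only the much smaller intermediate pairs would cross the network.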
PowerDrill combines column-oriented storage with in-memory computing to reach query rates on the order of 10,000 cells per second, 10-100 times the performance of traditional column stores.

It also optimizes the in-memory footprint of the data: compression and encoding techniques can reduce memory usage by a factor of roughly 16.
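One of the encoding techniques behind such memory savings is dictionary encoding, a staple of column stores. The sketch below is a generic illustration, not PowerDrill's actual implementation: each distinct value in a column is stored once, and every row keeps only a small integer code.

```python
def dict_encode(column):
    """Dictionary-encode a column: store each distinct value once in a
    dictionary, and replace every row value with its integer code."""
    dictionary, codes, index = [], [], {}
    for value in column:
        if value not in index:
            index[value] = len(dictionary)
            dictionary.append(value)
        codes.append(index[value])
    return dictionary, codes

def dict_decode(dictionary, codes):
    """Reconstruct the original column from dictionary and codes."""
    return [dictionary[code] for code in codes]

# A low-cardinality column compresses well: 7 strings become
# a 3-entry dictionary plus 7 small integers.
country = ["DE", "US", "DE", "DE", "US", "FR", "DE"]
dictionary, codes = dict_encode(country)
assert dict_decode(dictionary, codes) == country
```

Beyond saving memory, the engine can run filters and aggregations directly on the integer codes, which is one reason columnar in-memory systems scan so quickly.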
The main design goals of InfoSphere Streams are to:
- Respond quickly to events and changing business conditions and requirements.
- Support continuous analysis of data at rates that are orders of magnitude greater than existing systems.
- Adapt rapidly to changing data forms and types.
- Manage high availability, heterogeneity, and distribution for the new stream paradigm.
- Provide security and information confidentiality for shared information.
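The stream paradigm these goals describe can be made concrete with a small sketch. This is not InfoSphere Streams' SPL language or API; it is a generic continuous operator, written as an assumed example, that keeps a sliding window over incoming readings and flags values above a threshold as they arrive.

```python
from collections import deque

class SlidingWindowAverage:
    """Toy continuous-analysis operator: maintain the running average of
    the last `size` readings and flag any reading above `threshold`."""

    def __init__(self, size, threshold):
        self.window = deque(maxlen=size)  # old readings fall off automatically
        self.threshold = threshold

    def push(self, value):
        """Process one event as it arrives; return (window average, alert?)."""
        self.window.append(value)
        avg = sum(self.window) / len(self.window)
        return avg, value > self.threshold

# Events are processed one at a time, never materialized as a full dataset.
op = SlidingWindowAverage(size=3, threshold=100)
alerts = [v for v in [10, 20, 150, 30, 40] if op.push(v)[1]]
print(alerts)  # [150]
```

The key contrast with the Hadoop batch model above is that results are produced continuously, event by event, rather than after a job over a stored dataset completes.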
Huawei’s Big Data solution consists of two products: FusionInsight HD and FusionInsight LibrA. FusionInsight HD is an enterprise Hadoop distribution containing many components: HDFS, Yarn, HBase, Spark, MapReduce, Flink, Storm, Elk, Solr, Kafka, Loader, Flume, and so on. FusionInsight LibrA is a massively parallel processing (MPP) database that features elastic scalability, excellent performance, rock-solid reliability, and superior cost-effectiveness.