TABLE OF CONTENTS
LIST OF FIGURES
Preface
ACKNOWLEDGEMENT
CHAPTER 1 INTRODUCTION
INTRODUCTION
1.1 HPC SYSTEMS
1.2 CLUSTER CLOUD SYSTEM
1.3 INDUSTRY 4.0
1.4 IOT
1.5 BIG DATA
CHAPTER 2 LITERATURE REVIEW
2.1 INTRODUCTION
2.2 ADVANTAGES OF HPC CLOUD
2.3 CHALLENGES IN IMPLEMENTING HPC CLOUD
2.4 PROBLEM STATEMENT
2.5 CHALLENGES
CHAPTER 3 ARCHITECTURE
3.1 INTRODUCTION
3.2 HARDWARE LAYER
3.3 FRAMEWORK LAYER
3.4 APPLICATION LAYER
3.5 UNIQUE SENSE PROTOTYPE
3.6 HYBRID SOLUTION
3.6.1 ARM
3.6.2 HADOOP AND MAP REDUCE
CHAPTER 4 METHODOLOGY
4.1 POWER SOURCE
4.2 RASPBERRY PI
4.2.1 PROCESSOR
4.2.2 PERFORMANCE
4.2.3 OVERCLOCKING
4.3 OPERATING SYSTEM
4.4 JVM
4.5 HDUSER
4.6 SSH SECURE SHELL
4.7 PERMISSION
4.8 PROCESSES INVOCATION
4.9 MODEL
4.10 IMPLEMENTATION
4.10.1 JAVA
4.10.2 HDUSER
4.10.3 HADOOP INSTALLATION
4.10.4 SSH CONFIGURATION
4.10.5 HDFS CREATION & RELATED PROCESS
4.10.6 PROCESSES INVOCATION
4.10.7 JPS
CHAPTER 5 EXPERIMENTAL RESULT
5.1 RESULT
CHAPTER 6 DISCUSSION
6.1 DISCUSSION
CHAPTER 7 CONCLUSION
7.1 CONCLUSION
REFERENCES
LIST OF FIGURES
Fig 1. Layer Architecture
Fig 2. Smart Computing Architecture
Preface
Computing architectures are among the most complex and constrained areas of ongoing research. They deliver solutions to computation problems in different domains through the stack built above them. Architectural and integration constraints make it difficult to customize and modify computing systems for dynamic business and research needs. As a result, it has become expensive and challenging to maintain and customize mainframe computers and high-performance computers for mid-level and lightweight parallel tasks. The research initiative presented here, Unique Sense: Smart Computing Prototype, is part of the “UNIQUE SENSE” computing architecture and delivers an alternative to today’s computing architectures, aiming to satisfy the needs of future generations of diversified technologies and techniques and to bring extended support to ubiquitous environments. This smart computing prototype architecture is lightweight and compact, addressing the varied requirements of society. The proposed solution is a hybrid combination of cutting-edge technologies and techniques drawn from the various layers. The ultimate challenge of this system was to construct it as a low-cost, eco-friendly architecture, and the proposed smart computing architecture achieves exactly that.
ACKNOWLEDGEMENT
The authors are grateful to “6TH SENSE”, an advanced research and scientific experiment foundation, for its notable technical support throughout the completion of this research work. We are also thankful to everyone who supported this project and offered guidance to complete this book successfully and on time.
CHAPTER 1 INTRODUCTION
INTRODUCTION
We are living in an era where data is central to deciding the systems of the future, and its size grows every day. Scientists and other users who work on complex problems perform daily operations on these large data sets. Industries around the world face tremendous pressure to deliver products that match competing brands. To achieve those targets, they put a great deal of effort into improving their methods and technology. In this dynamic environment they need to monitor and control these processes in various ways, and they deploy analytic methods across their factories to identify quality and profit, a trend strengthened by integration and by the emergence of cyber-physical systems. Such large data sets create a huge load on servers. Even a supercomputer encounters difficulty with a large data set, especially when the data are distributed across a network. These problems led to the creation of Big Data: finding business value in multiple dimensions and providing a platform for industrial informatics.
In this work, we take our first step by creating a new SC (Smart Computing) prototype based on the ARM architecture, which also addresses current issues such as complex problems on large data sets. The proposed system supports task-oriented operation and focuses on data loads, which are the main issue for HPC and other cloud-based DFS systems.
This smart architecture runs on a low-powered ARM-based chip (the Raspberry Pi) to reduce power consumption and remain highly portable, and it works with the Hadoop framework to provide fault tolerance, data availability, and straightforward computation on large data sets. To support these goals we adopt the techniques and technologies described below.
1.1 HPC SYSTEMS
High Performance Computing (HPC) is used for processing complex data; it works in parallel on distributed environments to produce results faster and more effectively. It is well suited to solving complex programs in science, weather forecasting, medical systems, aerodynamic simulations, and similar fields.
High-performance computing (HPC) is the use of parallel processing for running advanced application programs efficiently, reliably and quickly. The term applies especially to systems that function above a teraflop, or 10^12 floating-point operations per second. The term HPC is occasionally used as a synonym for supercomputing, although technically a supercomputer is a system that performs at or near the currently highest operational rate for computers. Some supercomputers work at more than a petaflop, or 10^15 floating-point operations per second.
HPC systems are divided into two main categories, viz., shared memory systems, where all processors share the same memory space in a multi-core or multi-processor machine, and cluster systems, where individual systems are connected to a common network and work on a common problem. Task-parallel HPC systems focus primarily on scientific data sets and complex problems, and only secondarily on data loads.
The most common users of HPC systems are scientific researchers, engineers and academic institutions. Some government agencies, particularly the military, also rely on HPC for complex applications. High-performance systems often use custom-made components in addition to so-called commodity components. As demand for processing power and speed grows, HPC will likely interest businesses of all sizes, particularly for transaction processing and data warehouses. The occasional technology enthusiast might even use an HPC system to satisfy a desire for advanced computing power.
1.2 CLUSTER CLOUD SYSTEM
Cluster systems work on cloud environments or shared networks; they serve static and dynamic web pages, web services, and conventional distributed file system applications, where the data are treated as the priority requirement. These systems are mainly designed to store data effectively rather than to execute computation on it.
Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud computing, typically involves multiple cloud components communicating with each other over a loose coupling mechanism such as a messaging queue. Elastic provisioning implies intelligence in the use of tight or loose coupling as applied to mechanisms such as these and others.
illustration not visible in this excerpt
1.3 INDUSTRY 4.0
The Industry 4.0 revolution is most comprehensively focused on cyber-physical architecture. To achieve it, components such as sensors should be self-aware and self-predictive, enabling degradation monitoring and life prediction, which leads to production efficiency. Machine controllers should be aware, predict, and compare, which leads to maximum uptime and predictive health monitoring. In the same spirit, production systems such as networked manufacturing systems provide worry-free productivity through self-configuration, self-maintenance and self-organization. Industry 4.0 is, in essence, the transformation from a manufacturing business model to a service business model.
1.4 IOT
The internet of things (IoT) is the network of physical devices, vehicles, buildings and other items, embedded with electronics, software, sensors, actuators, and network connectivity, that enables these objects to collect and exchange data.
The IoT allows objects to be sensed and/or controlled remotely across existing network infrastructure, creating opportunities for more direct integration of the physical world into computer-based systems, and resulting in improved efficiency, accuracy and economic benefit. When the IoT is augmented with sensors and actuators, the technology becomes an instance of the more general class of cyber-physical systems, which also encompasses technologies such as smart grids, smart homes, intelligent transportation and smart cities. Each thing is uniquely identifiable through its embedded computing system and is able to interoperate within the existing Internet infrastructure.
The Internet of Things (IoT) can be identified along three major dimensions: things, internet, and semantics. The internet acts as the middleware of the system; the things are sensors that sense and bring information into the system; and with the help of semantics, or knowledge, the system performs its manipulation. These are the key factors behind the IoT, enabling ubiquitous connection among interconnected devices and providing accessibility at any time, anywhere, and in any form. This creates challenges in several dimensions: embedded communication between systems must achieve intercommunication with sensors, actuators, and similar devices, while middleware technologies face the challenge of providing a platform for Big Data analytics, high cognition, and related issues.
Integration with the Internet implies that devices will use an IP address as a unique identifier. However, due to the limited address space of IPv4 (which allows for 4.3 billion unique addresses), objects in the IoT will have to use IPv6 to accommodate the extremely large address space required. Objects in the IoT will not only be devices with sensory capabilities, but also provide actuation capabilities (e.g., bulbs or locks controlled over the Internet). To a large extent, the future of the internet of things will not be possible without the support of IPv6; and consequently the global adoption of IPv6 in the coming years will be critical for the successful development of the IoT in the future.
In an internet of things, the meaning of an event will not necessarily be based on a deterministic or syntactic model but would instead be based on the context of the event itself: this will also be a semantic web. Consequently, it will not necessarily need common standards that would not be able to address every context or use: some actors (services, components, avatars) will accordingly be self-referenced and, if ever needed, adaptive to existing common standards (predicting everything would be no more than defining a "global finality" for everything, which is just not possible with any of the current top-down approaches and standardizations). Some researchers argue that sensor networks are the most essential components of the internet of things.
1.5 BIG DATA
Big Data is a term that attracts society today with its expanding role in finding business value. In the competitive business world, the term big data represents not only the volume of data; beyond the various dimensions and nature of the data, it also covers data that existing systems are unable to process, or can process only at great expense. The aspects the term Big Data covers include velocity, variety, veracity, and others.
Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process data within a tolerable elapsed time. Big data "size" is a constantly moving target, as of 2012 ranging from a few dozen terabytes to many petabytes of data. Big data requires a set of techniques and technologies with new forms of integration to reveal insights from datasets that are diverse, complex, and of a massive scale.
Big data can be described by the following characteristics:
Volume
The quantity of generated and stored data. The size of the data determines the value and potential insight, and whether it can actually be considered big data or not.
Variety
The type and nature of the data. This helps people who analyse it to effectively use the resulting insight.
Velocity
In this context, the speed at which the data is generated and processed to meet the demands and challenges that lie in the path of growth and development.
Variability
Inconsistency of the data set can hamper processes to handle and manage it.
Veracity
The quality of captured data can vary greatly, affecting accurate analysis.
CHAPTER 2 LITERATURE REVIEW
2.1 INTRODUCTION
The data failure rate in HPC and other clustered systems is high, and many studies have analysed real failures to understand their characteristics; this knowledge of failure characteristics can be used in resource allocation to improve cluster availability.
Studies have introduced HPC in the cloud to solve traditional HPC issues and enterprise problems. Developing HPC in the cloud brings challenges to developers and vendors. Many researchers say that the cloud can be used for running HPC applications when the demand for computational power is suddenly high and the application is loosely coupled. When resource utilization is more than about 30%, it is better to use traditional HPC systems.
Due to high operational and construction costs and high power consumption, traditional HPC vendors have not built large HPC clusters. For start-up companies, running HPC applications in the cloud is better because it is easy for them to buy cloud capacity on demand.
There are few parallel programming tools available in the market, and improving raw system performance requires working at the ground level, for example reproducing the memory latency of a real system, because memory access, including random access to memory, is the basis of HPC benchmark systems.
The growing complexity of processor architectures and massive parallelization across all system threads create a challenging environment for developers. Programmers must attend to hundreds of processors, memory banks, and the data exchange between these systems, and complex calculations force them to write complex logic. Most programmers rely on the MPI message-passing library, which requires low-level logic and core commands to interact with processors connected across vast networks. Programmers should therefore choose a language that expresses parallelization at a high level of abstraction.
2.2 ADVANTAGES OF HPC CLOUD
Instant availability – Cloud offers instant services as per the end user requirements based on the availability of resources.
Large capacity – Users can instantly increase the capacity of the system within the cloud.
Software choice – end users can customize their environment from the OS up.
Virtualized – instances can easily be moved to and from similar clouds.
2.3 CHALLENGES IN IMPLEMENTING HPC CLOUD
Close to the hardware – over many years, many man-hours have been spent writing HPC core libraries and applications that work closely with the hardware and interact with OS drivers. In an HPC cloud, however, the organizations using the system have less control over the core system libraries.
User space communication – HPC user applications often need to bypass the OS kernel and communicate directly with remote user thread processes.
Tuned hardware – HPC hardware is often selected on the basis of communication, memory, and processor speed for a given application basis.
Tuned storage – HPC storage is often designed for a specific application set and user base according to the specific organization.
Batch scheduling – All HPC systems use a batch scheduler to share limited resources in the network.
Some applications can utilize highly parallel systems but do not require a high-performance interconnect or fast storage; in the cloud, however, memory is shared uniformly as per user requirements.
Some systems require low latency and a high-speed interconnect or faster storage; cloud offerings do not usually cover these requirements within their scope.
Most interconnected HPC systems use user-space communication, in which the communication pathway is wired to bypass the OS kernel layer, something that is not possible in the cloud. If high-performance networks are not available, many HPC applications run slowly and suffer from poor scalability.
In general, I/O-sensitive applications run slowly because of storage bottlenecks. To resolve this performance issue, the I/O system works over a parallel file system, which drastically increases the I/O bandwidth of the computing nodes. Parallel computing with Single-Instruction Multiple-Data processors is not supported by the cloud.
2.4 PROBLEM STATEMENT
Recent studies show that HPC systems face a number of limitations, and solutions are applied in industry and work environments according to requirements. Here we address a few limitations of HPC systems. To begin with power consumption: the power consumed by an HPC system is huge compared with a cluster system, owing to maximum utilization of the hardware when it processes complex operations. Infrastructure maintenance for an HPC system is also difficult, because the heat generated by the multi-core processors is very high and requires ultra-cooling fans and very good infrastructure support to maintain the system temperature. Finally, an HPC system is huge; it cannot simply be carried anywhere and installed to resolve highly complex problems.
Memory must be increased to accommodate growing data sizes, which results in the installation of more multi-core processors, but this ends in performance degradation in I/O transmission as messages are transferred. HPC systems are also designed in such a way that the user carries more responsibility for managing the system.
On the other side, we address some of the issues found in cloud systems, which serve web pages, web services and other applications on the internet. Cluster systems are not designed to work effectively on highly complex problems. A cluster system works on scattered memory, so files are split across various systems; for a complex problem the data must be processed in parallel on multiple processors, which in turn increases the rate of data exchange. The cost of setting up and maintaining the common network is high, and exchanging data for analysis among the required systems is costly.
The data exchange rate is high for industrial robots because they commonly operate under a master system. PLCs play a vital role in control, but their static nature sits poorly with dynamic procedures. Industry 4.0 needs a sensible workflow that is aware of, and reacts to, sensor information. However, the transformation and interconnection between systems built on various platforms are split across different database and management techniques, and as a result we fail to make decisions based on the data generated by the subsystems. We know our targets in terms of successes and failures, but Industry 4.0 demands seamless quality with efficient consumption. Achieving this raises many issues: placing higher-end machines at distributed locations makes integration and decision-making complex, and the cost of such machines is high. It also creates problems in integrating and managing unstructured, scalable data while finding solutions on top of it.
Likewise, the system needs to respond to data and also perform special tasks; when the system fails, it can lead to failure of the entire chained process. Special hardware must be placed for data conversion and to establish communication with sensors and communicators. We are therefore forced to implement many different mechanisms and machines, making everything complex where easy operation is wanted. The result is unique machines with expensive operations in every dimension.
2.5 CHALLENGES
The challenges are to identify a lightweight, low-power architecture that supports a Hadoop installation and meets its basic hardware requirements. The system should provide high availability, durability and fault tolerance, and support a variety of high data loads. We need to identify whether the chosen architecture supports clustering and can utilize maximum resources to improve system performance. The system should use a minimal instruction set that can be handled by a single-board computer. Finally, we must identify the power source, the power consumption, and the heat resistance.
CHAPTER 3 ARCHITECTURE
3.1 INTRODUCTION
illustration not visible in this excerpt
Fig 1. Layer Architecture
This architecture consists of three layers: the hardware layer (ARM), the framework layer (the middle layer, Hadoop and MapReduce), and the application layer (the user view).
3.2 HARDWARE LAYER
As shown in Fig 1, the hardware layer is the ARM architecture. It is designed around reduced instruction set computing (RISC) and uses fewer transistors, which makes the chip consume less power, limits heat emission, and processes complex data efficiently. It is well suited to a Java-based environment in which a distributed file system such as Hadoop runs, and it supports solving real-world complex problems. It helps the framework layer create individual threads for each operation and allows the framework layer to work independently in a portable environment.
3.3 FRAMEWORK LAYER
For the framework layer in Fig 1 we have chosen the Hadoop framework, which works effectively on distributed environments irrespective of the hardware design. It computes data in parallel using the MapReduce framework, and it splits the data for storage across different systems to support fault tolerance and improve data availability. Recent studies indicate that Hadoop is the best framework for supporting large volumes of data.
The MapReduce framework supports a parallel computing process (what HPC does); it works effectively on complex data over large data sets according to user logic. It has two steps, map and reduce. The map step splits and arranges all the data according to the user logic, and the reduce step combines the results provided by the map step. The data are processed in parallel environments using individual threads that run on separate hardware distributed over the network.
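As a simple illustration, consider a hypothetical word-count job (our example, not part of the original text). The two steps transform the data as follows:

map input: to be or not to be
map output: (to,1) (be,1) (or,1) (not,1) (to,1) (be,1)
shuffle/sort: (be,[1,1]) (not,[1]) (or,[1]) (to,[1,1])
reduce output: (be,2) (not,1) (or,1) (to,2)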
3.4 APPLICATION LAYER
In the application layer of Fig 1 the user sees the results generated by the framework layer. The user needs to configure the paths and the number of replications to be maintained in HDFS. Here the user's responsibilities are far fewer than in an HPC system. For the MapReduce framework the user codes the file-splitting logic according to business needs, and the framework takes care of the splitting and combining operations.
3.5 UNIQUE SENSE PROTOTYPE
illustration not visible in this excerpt
Fig 2. Smart Computing Architecture
- M1 - Map Algorithm for sorting the input
- S1 - Shuffle, Partition/sort per map output
The Unique Sense prototype is a hybrid combination of Hadoop on the ARM architecture. The procedure for combining these two streams to create the model is given below.
3.6 HYBRID SOLUTION
3.6.1 ARM
illustration not visible in this excerpt
ARM, originally Acorn RISC Machine, later Advanced RISC Machine, is a family of reduced instruction set computing (RISC) architectures for computer processors, configured for various environments. A RISC-based computer design approach means processors require fewer transistors than typical complex instruction set computing (CISC) x86 processors in most personal computers. This approach reduces costs, heat and power use. Such reductions are desirable traits for light, portable, battery-powered devices—including smartphones, laptops and tablet computers, and other embedded systems. For Supercomputers, which consume large amounts of electricity, ARM could also be a power-efficient solution.
ARM is also an instruction set architecture used by processors based on the RISC design. It defines three Cortex profiles, for application, real-time and microcontroller use, known as Cortex-A, Cortex-R and Cortex-M. The Raspberry Pi ships with an ARM1176JZF-S core, whose properties largely match the ARM11 family: a 32-bit ARM architecture with an ARMv6 architecture core. These cores emit less heat than previous models, carry a lower heat risk, and are well suited to real-time processing, which is why most mobile phones use this architecture. The 1176 series in particular includes security extensions.
The 32-bit ARM architecture, such as ARMv7-A, is the most widely used architecture in mobile devices. The architecture has evolved over time, and version seven of the architecture, ARMv7, defines three architecture "profiles":
- A-profile, the "Application" profile, implemented by 32-bit cores in the Cortex-A series and by some non-ARM cores;
- R-profile, the "Real-time" profile, implemented by cores in the Cortex-R series
- M-profile, the "Microcontroller" profile, implemented by most cores in the Cortex-M series.
3.6.1.1 CPU MODES
Except in the M-profile, the 32-bit ARM architecture specifies several CPU modes, depending on the implemented architecture features. At any moment in time, the CPU can be in only one mode, but it can switch modes due to external events (interrupts) or programmatically.
- User mode: The only non-privileged mode.
- FIQ mode: A privileged mode that is entered whenever the processor accepts an FIQ interrupt.
- IRQ mode: A privileged mode that is entered whenever the processor accepts an IRQ interrupt.
- Supervisor (svc) mode: A privileged mode entered whenever the CPU is reset or when an SVC instruction is executed.
- Abort mode: A privileged mode that is entered whenever a pre-fetch abort or data abort exception occurs.
- Undefined mode: A privileged mode that is entered whenever an undefined instruction exception occurs.
- System mode (ARMv4 and above): The only privileged mode that is not entered by an exception. It can only be entered by executing an instruction that explicitly writes to the mode bits of the CPSR.
- Monitor mode (ARMv6 and ARMv7 Security Extensions, ARMv8 EL3): A monitor mode is introduced to support TrustZone extension in ARM cores.
- Hyp mode (ARMv7 Virtualization Extensions, ARMv8 EL2): A hypervisor mode that supports Popek and Goldberg virtualization requirements for the non-secure operation of the CPU.
- Thread mode (ARMv6-M, ARMv7-M, ARMv8-M): A mode which can be specified as either privileged or unprivileged, while whether Main Stack Pointer (MSP) or Process Stack Pointer (PSP) is used can also be specified in CONTROL register with privileged access. This mode is designed for user tasks in RTOS environment but it's typically used in bare-metal for super-loop.
- Handler mode (ARMv6-M, ARMv7-M, ARMv8-M): A mode dedicated for exception handling (except the RESET which are handled in Thread mode). Handler mode always uses MSP and works in privileged level.
3.6.1.2 INSTRUCTION SET
The original (and subsequent) ARM implementation was hardwired without microcode, like the much simpler 8-bit 6502 processor used in prior Acorn microcomputers.
The 32-bit ARM architecture (and the 64-bit architecture for the most part) includes the following RISC features:
- Load/store architecture.
- No support for unaligned memory accesses in the original version of the architecture. ARMv6 and later, except some microcontroller versions, support unaligned accesses for half-word and single-word load/store instructions with some limitations, such as no guaranteed atomicity.
- Uniform 16× 32-bit register file (including the program counter, stack pointer and the link register).
- Fixed instruction width of 32 bits to ease decoding and pipelining, at the cost of decreased code density. Later, the Thumb instruction set added 16-bit instructions and increased code density.
- Mostly single clock-cycle execution.
To compensate for the simpler design, some additional design features were used:
- Conditional execution of most instructions reduces branch overhead and compensates for the lack of a branch predictor.
- Arithmetic instructions alter condition codes only when desired.
- 32-bit barrel shifter can be used without performance penalty with most arithmetic instructions and address calculations.
- Has powerful indexed addressing modes.
- A link register supports fast leaf function calls.
- A simple, but fast, 2-priority-level interrupt subsystem has switched register banks.
3.6.2 HADOOP AND MAPREDUCE
Hadoop is an open-source software framework for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware. All the modules in Hadoop are designed with a fundamental assumption that hardware failures are common and should be automatically handled by the framework.
The core of Hadoop consists of a storage part, known as the Hadoop Distributed File System (HDFS), and a processing part called MapReduce.
The Hadoop distributed file system (HDFS) is a distributed, scalable, and portable file system. HDFS stores large files (typically in the range of gigabytes to terabytes) across multiple machines. It achieves reliability by replicating the data across multiple hosts, and hence theoretically does not require RAID storage on hosts (but to increase I/O performance some RAID configurations are still useful). With the default replication value, 3, data is stored on three nodes: two on the same rack, and one on a different rack.
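The replication value can be tuned per cluster in hdfs-site.xml. A typical entry, shown here with the default value of 3 (the property name dfs.replication is the standard Hadoop one; the value is illustrative), follows the same format as the configuration excerpt in Section 4.10.5:

<property>
<name>dfs.replication</name>
<value>3</value>
</property>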
The HDFS file system includes a so-called secondary namenode, a misleading name that some might incorrectly interpret as a backup namenode for when the primary namenode goes offline. In fact, the secondary namenode regularly connects with the primary namenode and builds snapshots of the primary namenode's directory information, which the system then saves to local or remote directories. These checkpointed images can be used to restart a failed primary namenode without having to replay the entire journal of file-system actions, then to edit the log to create an up-to-date directory structure. Because the namenode is the single point for storage and management of metadata, it can become a bottleneck for supporting a huge number of files, especially a large number of small files. HDFS Federation, a new addition, aims to tackle this problem to a certain extent by allowing multiple namespaces served by separate namenodes. Moreover, there are some issues in HDFS, namely, small file issue, scalability problem, Single Point of Failure (SPoF), and bottleneck in huge metadata request. An advantage of using HDFS is data awareness between the job tracker and task tracker. The job tracker schedules map or reduce jobs to task trackers with an awareness of the data location. For example: if node A contains data (x,y,z) and node B contains data (a,b,c), the job tracker schedules node B to perform map or reduce tasks on (a,b,c) and node A would be scheduled to perform map or reduce tasks on (x,y,z). This reduces the amount of traffic that goes over the network and prevents unnecessary data transfer. When Hadoop is used with other file systems, this advantage is not always available. This can have a significant impact on job-completion times, which has been demonstrated when running data-intensive jobs.
HDFS was designed for mostly immutable files and may not be suitable for systems requiring concurrent write-operations.
HDFS can be mounted directly with a Filesystem in Userspace (FUSE) virtual file system on Linux and some other Unix systems.
Hadoop is a platform that provides both distributed storage and computational capabilities. It brings support in two dimensions viz., HDFS for storage and map reduce for computational capabilities.
The base Hadoop framework is composed of the following modules:
- Hadoop Common – contains libraries and utilities needed by other Hadoop modules;
- Hadoop Distributed File System (HDFS) – a distributed file-system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster;
- Hadoop YARN – a resource-management platform responsible for managing computing resources in clusters and using them for scheduling of users' applications; and
- Hadoop MapReduce – an implementation of the MapReduce programming model for large scale data processing.
MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google’s clusters every day, processing a total of more than twenty petabytes of data per day.
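As a hedged sketch of what such user-specified functions can look like, the classic word-count example can be written as two small shell scripts (the names mapper.sh and reducer.sh are our own illustration, not part of the original work). The mapper emits a (word, 1) pair per word; the reducer, whose input arrives grouped and sorted by key, sums the counts per word:

#!/bin/sh
# mapper.sh - turn every word of the input into a "word<TAB>1" pair
tr -cs 'A-Za-z' '\n' | grep -v '^$' | awk '{ print $1 "\t" 1 }'

#!/bin/sh
# reducer.sh - input arrives sorted by key; sum the counts per word
awk -F '\t' '
$1 != prev { if (prev != "") print prev "\t" sum; prev = $1; sum = 0 }
{ sum += $2 }
END { if (prev != "") print prev "\t" sum }'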
A MapReduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a file-system. The framework takes care of scheduling tasks, monitoring them and re-executes the failed tasks.
Typically the compute nodes and the storage nodes are the same, that is, the MapReduce framework and the Hadoop Distributed File System (see HDFS Architecture Guide) are running on the same set of nodes. This configuration allows the framework to effectively schedule tasks on the nodes where data is already present, resulting in very high aggregate bandwidth across the cluster.
The MapReduce framework consists of a single master JobTracker and one slave TaskTracker per cluster-node. The master is responsible for scheduling the jobs' component tasks on the slaves, monitoring them and re-executing the failed tasks. The slaves execute the tasks as directed by the master.
Minimally, applications specify the input/output locations and supply map and reduce functions via implementations of appropriate interfaces and/or abstract-classes. These, and other job parameters, comprise the job configuration. The Hadoop job client then submits the job (jar/executable etc.) and configuration to the JobTracker which then assumes the responsibility of distributing the software/configuration to the slaves, scheduling tasks and monitoring them, providing status and diagnostic information to the job-client.
Although the Hadoop framework is implemented in Java™, MapReduce applications need not be written in Java.
- Hadoop Streaming is a utility which allows users to create and run jobs with any executables (e.g. shell utilities) as the mapper and/or the reducer.
- Hadoop Pipes is a SWIG-compatible C++ API to implement MapReduce applications (non-JNI™ based).
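For example, under the Hadoop 1.1.2 layout used later in this book, the two scripts sketched above could be submitted as a streaming job roughly as follows (the jar location and the HDFS input/output paths are illustrative):

hduser@raspberrypi /usr/local/hadoop $ hadoop jar contrib/streaming/hadoop-streaming-1.1.2.jar -input /user/hduser/books -output /user/hduser/wordcount -mapper mapper.sh -reducer reducer.sh -file mapper.sh -file reducer.sh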
MapReduce is a framework for processing parallelizable problems across large datasets using a large number of computers (nodes), collectively referred to as a cluster (if all nodes are on the same local network and use similar hardware) or a grid (if the nodes are shared across geographically and administratively distributed systems, and use more heterogeneous hardware). Processing can occur on data stored either in a filesystem (unstructured) or in a database (structured). MapReduce can take advantage of the locality of data, processing it near the place it is stored in order to reduce the distance over which it must be transmitted.
- "Map" step: Each worker node applies the "map()" function to the local data, and writes the output to a temporary storage. A master node ensures that only one copy of redundant input data is processed.
- "Shuffle" step: Worker nodes redistribute data based on the output keys (produced by the "map()" function), such that all data belonging to one key is located on the same worker node.
- "Reduce" step: Worker nodes now process each group of output data, per key, in parallel.
MapReduce allows for distributed processing of the map and reduction operations. Provided that each mapping operation is independent of the others, all maps can be performed in parallel – though in practice this is limited by the number of independent data sources and/or the number of CPUs near each source. Similarly, a set of 'reducers' can perform the reduction phase, provided that all outputs of the map operation that share the same key are presented to the same reducer at the same time, or that the reduction function is associative. The parallelism also offers some possibility of recovering from partial failure of servers or storage during the operation: if one mapper or reducer fails, the work can be rescheduled – assuming the input data is still available.
Another way to look at MapReduce is as a 5-step parallel and distributed computation:
1. Prepare the Map() input – the "MapReduce system" designates Map processors, assigns the input key value K1 that each processor would work on, and provides that processor with all the input data associated with that key value.
2. Run the user-provided Map() code – Map() is run exactly once for each K1 key value, generating output organized by key values K2.
3. "Shuffle" the Map output to the Reduce processors – the MapReduce system designates Reduce processors, assigns the K2 key value each processor should work on, and provides that processor with all the Map-generated data associated with that key value.
4. Run the user-provided Reduce() code – Reduce() is run exactly once for each K2 key value produced by the Map step.
5. Produce the final output – the MapReduce system collects all the Reduce output, and sorts it by K2 to produce the final outcome.
These five steps can be logically thought of as running in sequence – each step starts only after the previous step is completed – although in practice they can be interleaved as long as the final result is not affected.
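Assuming the illustrative mapper.sh and reducer.sh sketched earlier are executable, the whole five-step flow can be imitated on a single machine with an ordinary pipeline, where sort stands in for the shuffle step (input.txt is a placeholder name):

cat input.txt | ./mapper.sh | sort | ./reducer.sh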
CHAPTER 4 METHODOLOGY
4.1 POWER SOURCE
There are two possible ways to provide a power source for this system. In this architecture we have chosen micro USB instead of GPIO, to achieve quick stability with available resources while retaining the capability to power I/O components. A 5 V, 2 A supply meets our requirement.
4.2 RASPBERRY PI
The Raspberry Pi Foundation is an educational charity situated in the UK whose mission is to advance education in computing and technology for society. As its contribution it developed a credit-card-sized, lightweight single-board computer called the Raspberry Pi.
The Raspberry Pi hardware has evolved through several versions that feature variations in memory capacity and peripheral-device support.
The block diagram depicts models A, B, A+, and B+. Models A, A+, and Zero lack the Ethernet and USB hub components. The Ethernet adapter is internally connected to an additional USB port. In models A, A+, and Zero the USB port is connected directly to the system on a chip (SoC). On model B+ and later models the USB/Ethernet chip contains a five-port USB hub, of which four ports are available, while model B only provides two. On the model Zero, the USB port is also connected directly to the SoC, but it uses a micro USB (OTG) port.
4.2.1 PROCESSOR
The SoC used in the first generation Raspberry Pi is somewhat equivalent to the chip used in older smartphones (such as iPhone, 3G, 3GS). The Raspberry Pi is based on the Broadcom BCM2835 SoC, which includes an 700 MHz ARM1176JZF-S processor, VideoCore IV graphics processing unit (GPU), and RAM. It has a Level 1 cache of 16 KB and a Level 2 cache of 128 KB. The Level 2 cache is used primarily by the GPU. The SoC is stacked underneath the RAM chip, so only its edge is visible.
4.2.2 PERFORMANCE
While operating at 700 MHz by default, the first generation Raspberry Pi provided a real-world performance roughly equivalent to 0.041 GFLOPS. On the CPU level the performance is similar to a 300 MHz Pentium II.
4.2.3 OVERCLOCKING
The first generation Raspberry Pi chip operated at 700 MHz by default, and did not become hot enough to need a heat sink or special cooling unless the chip was overclocked. The second generation runs at 900 MHz by default; it also does not become hot enough to need a heatsink or special cooling, although overclocking may heat up the SoC more than usual.
Most Raspberry Pi chips could be overclocked to 800 MHz, and some to 1000 MHz. There are reports the second generation can be similarly overclocked, in extreme cases, even to 1500 MHz (discarding all safety features and over-voltage limitations).
Table1. Technical information
illustration not visible in this excerpt
4.3 OPERATING SYSTEM
Wheezy is one of the stable releases of Debian, a Linux distribution. Its multi-arch feature allows 32-bit binaries to run on a 64-bit operating system, and its support extends to ARM. We therefore chose it as the supporting system for Hadoop on the ARM architecture, using Raspbian, the Debian wheezy Linux operating system (kernel version 3.12, released 9 September 2014), from the Raspberry Pi support site.
4.4 JVM
The ARM architecture supports a Java environment, and we need a JVM to install the Hadoop framework, so we installed Java version "1.7.0_07" on the Raspberry Pi.
The Java virtual machine is an abstract (virtual) computer defined by a specification. This specification omits implementation details that are not essential to ensure interoperability: the memory layout of run-time data areas, the garbage-collection algorithm used, and any internal optimization of the Java virtual machine instructions (their translation into machine code). The main reason for this omission is to not unnecessarily constrain implementers. Any Java application can be run only inside some concrete implementation of the abstract specification of the Java virtual machine.
One of the organizational units of JVM byte code is a class. A class loader implementation must be able to recognize and load anything that conforms to the Java class file format. Any implementation is free to recognize other binary forms besides class files, but it must recognize class files.
The class loader performs three basic activities in this strict order:
1. Loading: finds and imports the binary data for a type
2. Linking: performs verification, preparation, and (optionally) resolution
- Verification: ensures the correctness of the imported type
- Preparation: allocates memory for class variables and initializes the memory to default values
- Resolution: transforms symbolic references from the type into direct references.
3. Initialization: invokes Java code that initializes class variables to their proper starting values.
In general, there are two types of class loader: bootstrap class loader and user defined class loader.
Every Java virtual machine implementation must have a bootstrap class loader, capable of loading trusted classes. The Java virtual machine specification doesn't specify how a class loader should locate classes.
The JVM verifies all bytecode before it is executed. This verification consists primarily of three types of checks:
- Branches are always to valid locations
- Data is always initialized and references are always type-safe
- Access to private or package private data and methods is rigidly controlled
4.5 HDUSER
We have added a sudo user with rights to install applications; that user is later added to the Hadoop group to access the file system.
4.6 SSH SECURE SHELL
We use SSH (Secure Shell), a widely used protocol for connecting to remote systems. After creating an SSH key, we share that key with the user to establish communication between the nodes.
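A typical key setup for the hduser account looks like the following (a sketch of the standard procedure, not a transcript from the original work; the empty passphrase lets Hadoop open SSH connections without prompting):

hduser@raspberrypi ~ $ ssh-keygen -t rsa -P ""
hduser@raspberrypi ~ $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys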
4.7 PERMISSION
Common permissions were given to the users so that they can read, write and execute (traverse, for directories) as part of their processing.
4.8 PROCESSES INVOCATION
After installing the required components on Linux, the processes most commonly need to be started manually.
4.9 MODEL
illustration not visible in this excerpt
L is the layer denoting the physical assembly-line robots, the assembly-line master system, or a collection of units capable of processing and collecting information from that line's robots. It can include sensors and information-carrying data lines, such as parallel and serial cables, which transfer signals and data for processing and for retrieving information from the process, under the command of a single PLC or numerous PLCs. Today Industry 4.0 provides the vision of a seamless, self-aware cyber-physical system capable of working smarter; so far, mostly static solutions have been offered toward that vision.
S is the smart computing layer. It can be placed within the physical systems in a dynamic environment, as a single unit or a cluster of systems, depending on industrial need. It is a lightweight, compact, low-power-consumption model that can work like a computer but is based on the smart ARM architecture, which is well known for 24x7 operation, as in our mobile phones. On this basis, the Unique Sense architecture primarily aims to ensure a distributed data system with parallel processing across it, driven by industrial needs such as Big Data initiatives. The units can work individually or together to achieve major tasks, so the S layer may contain a cluster of networked smart computing units. This layer can satisfy the IoT properties and is efficient for multiple operations.
4.10 IMPLEMENTATION
4.10.1 JAVA
The following commands install Java on the Raspberry Pi.
pi@raspberrypi ~ $ sudo apt-get install openjdk-7-jdk
pi@raspberrypi ~ $ java -version
java version "1.7.0_07"
OpenJDK Runtime Environment (IcedTea7 2.3.2) (7u7-2.3.2a-1+rpi1)
OpenJDK Zero VM (build 22.0-b10, mixed mode)
4.10.2 HDUSER
The following commands create a new user for Hadoop, to avoid file system collisions. It is also a sudo user with rights to install applications, and it will later be added to the Hadoop group to access its file system.
pi@raspberrypi ~ $ sudo addgroup hadoop
pi@raspberrypi ~ $ sudo adduser --ingroup hadoop hduser
pi@raspberrypi ~ $ sudo adduser hduser sudo
4.10.3 HADOOP INSTALLATION
hduser@raspberrypi ~ $ hadoop version
Hadoop 1.1.2
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782
4.10.4 SSH CONFIGURATION
Starting an SSH session to localhost:
hduser@raspberrypi ~ $ ssh localhost
Linux raspberrypi 3.12.28+ #709 PREEMPT Mon Sep 8 15:28:00 BST 2014 armv6l
4.10.5 HDFS CREATION & RELATED PROCESS
The following commands first create a directory and then format it with the HDFS file system. This space is kept apart from the regular Linux file system; out of data-protection concern we created a separate user to avoid collisions with the regular Linux file system.
hduser@raspberrypi ~ $ sudo mkdir -p /fs/hadoop/tmp
This command gives ownership of the directory to hduser and its group so they can operate on it:
hduser@raspberrypi ~ $ sudo chown hduser:hadoop /fs/hadoop/tmp
This step grants privileges to the user: 750 is a common permission setting whereby the owner can read, write and execute (traverse, for directories), while group users may only read and execute, with write denied. It also protects against data-writing violations from other intruders.
hduser@raspberrypi ~ $ sudo chmod 750 /fs/hadoop/tmp
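For reference, the octal digits of 750 decompose as follows (4 = read, 2 = write, 1 = execute):

7 = 4+2+1 = rwx (owner: hduser)
5 = 4+0+1 = r-x (group: hadoop)
0 = 0+0+0 = --- (others)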
hduser@raspberrypi ~ $ hadoop namenode -format
The hadoop namenode -format command formats the file system at the location specified in hdfs-site.xml. For example, here the name node directory is /usr/local/hadoop/dfs/name:
<property>
<name>dfs.name.dir</name>
<value>/usr/local/hadoop/dfs/name</value>
<final>true</final>
</property>
4.10.6 PROCESSES INVOCATION
Here start-all.sh starts the required Hadoop components: the name node, the data node, the secondary name node, the job tracker and the task tracker.
hduser@raspberrypi ~ $ start-all.sh
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-namenode-raspberrypi.out
localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-datanode-raspberrypi.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-secondarynamenode-raspberrypi.out
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-jobtracker-raspberrypi.out
localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-tasktracker-raspberrypi.out
4.10.7 JPS
The jps tool lists the instrumented HotSpot Java Virtual Machines (JVMs) on the target system. The tool is limited to reporting information on JVMs for which it has access permissions. The numeric value shown before each instrumented JVM is its identification number.
Listing the instrumented JVMs on the local host:
hduser@raspberrypi ~ $ jps
3051 - Jps
2612 - NameNode
2816 - SecondaryNameNode
2710 - DataNode
2999 - TaskTracker
2892 - JobTracker
CHAPTER 5 EXPERIMENTAL RESULT
5.1 RESULT
hduser@raspberrypi /usr/local/hadoop $ hadoop jar hadoop-examples-1.1.2.jar pi 5 50
Number of Maps = 5
Samples per Map = 50
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Starting Job
14/12/13 11:48:09 INFO mapred.FileInputFormat: Total input paths to process : 5
14/12/13 11:48:21 INFO mapred.JobClient: Running job: job_201412131131_0001
14/12/13 11:48:22 INFO mapred.JobClient: map 0% reduce 0%
14/12/13 11:52:33 INFO mapred.JobClient: map 20% reduce 0%
14/12/13 11:55:36 INFO mapred.JobClient: map 40% reduce 0%
14/12/13 11:55:49 INFO mapred.JobClient: map 40% reduce 13%
14/12/13 11:57:43 INFO mapred.JobClient: map 60% reduce 13%
14/12/13 11:57:55 INFO mapred.JobClient: map 60% reduce 20%
14/12/13 11:58:04 INFO mapred.JobClient: map 80% reduce 20%
14/12/13 11:58:18 INFO mapred.JobClient: map 80% reduce 26%
14/12/13 11:59:15 INFO mapred.JobClient: map 100% reduce 26%
14/12/13 11:59:27 INFO mapred.JobClient: map 100% reduce 33%
14/12/13 11:59:46 INFO mapred.JobClient: map 100% reduce 100%
14/12/13 12:00:40 INFO mapred.JobClient: Job complete: job_201412131131_0001
14/12/13 12:00:41 INFO mapred.JobClient: Counters: 30
14/12/13 12:00:42 INFO mapred.JobClient: Job Counters
14/12/13 12:00:42 INFO mapred.JobClient: Launched reduce tasks=1
14/12/13 12:00:42 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=976959
14/12/13 12:00:42 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/12/13 12:00:42 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/12/13 12:00:42 INFO mapred.JobClient: Launched map tasks=6
14/12/13 12:00:42 INFO mapred.JobClient: Data-local map tasks=6
14/12/13 12:00:42 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=421500
14/12/13 12:00:42 INFO mapred.JobClient: File Input Format Counters
14/12/13 12:00:42 INFO mapred.JobClient: Bytes Read=590
14/12/13 12:00:42 INFO mapred.JobClient: File Output Format Counters
14/12/13 12:00:42 INFO mapred.JobClient: Bytes Written=97
14/12/13 12:00:42 INFO mapred.JobClient: FileSystemCounters
14/12/13 12:00:42 INFO mapred.JobClient: FILE_BYTES_READ=116
14/12/13 12:00:42 INFO mapred.JobClient: HDFS_BYTES_READ=1210
14/12/13 12:00:42 INFO mapred.JobClient: FILE_BYTES_WRITTEN=305403
14/12/13 12:00:42 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=215
14/12/13 12:00:42 INFO mapred.JobClient: Map-Reduce Framework
14/12/13 12:00:42 INFO mapred.JobClient: Map output materialized bytes=140
14/12/13 12:00:42 INFO mapred.JobClient: Map input records=5
14/12/13 12:00:42 INFO mapred.JobClient: Reduce shuffle bytes=140
14/12/13 12:00:42 INFO mapred.JobClient: Spilled Records=20
14/12/13 12:00:42 INFO mapred.JobClient: Map output bytes=90
14/12/13 12:00:42 INFO mapred.JobClient: Total committed heap usage (bytes)=1021792256
14/12/13 12:00:42 INFO mapred.JobClient: CPU time spent (ms)=73380
14/12/13 12:00:42 INFO mapred.JobClient: Map input bytes=120
14/12/13 12:00:42 INFO mapred.JobClient: SPLIT_RAW_BYTES=620
14/12/13 12:00:42 INFO mapred.JobClient: Combine input records=0
14/12/13 12:00:42 INFO mapred.JobClient: Reduce input records=10
14/12/13 12:00:42 INFO mapred.JobClient: Reduce input groups=10
14/12/13 12:00:42 INFO mapred.JobClient: Combine output records=0
14/12/13 12:00:42 INFO mapred.JobClient: Physical memory (bytes) snapshot=726032384
14/12/13 12:00:42 INFO mapred.JobClient: Reduce output records=0
14/12/13 12:00:42 INFO mapred.JobClient: Virtual memory (bytes) snapshot=2161262592
14/12/13 12:00:42 INFO mapred.JobClient: Map output records=10
Job Finished in 758.142 seconds
Estimated value of Pi is 3.14800000000000000000
CHAPTER 6 DISCUSSION
6.1 DISCUSSION
The proposed system is currently installed on a heavyweight operating system, which decreases its performance. Our team next plans to implement this architecture on a lightweight operating system that is still able to run the Hadoop framework. The Java version used in this implementation is the common Java platform compatible with Linux system architectures, so a byte-code compiler tuned to this system should be introduced to increase performance. The present infrastructure handles only a modest data volume processed in a parallel manner, but it is a major first step toward a multi-node, lightweight, compact distributed architecture for demand-based Big Data processing. The proposed model is likewise an innovative step toward the next generation of industrial findings under the Industry 4.0 initiative. The model is now successfully deployed to load data and process that information in parallel. So far the difficulty has been bringing the embedded world together with computation systems, but this model is capable of collecting information from machinery via GPIO, helping establish a new era of cyber-physical systems that satisfies the needs of the Industry 4.0 revolution. It is also an IoT device capable of supporting Big Data processing and, in addition, provides a platform for data distribution through its hybrid solution, with low power consumption and no need for special cooling systems in a common environment (this may vary with industrial environment standards).
CHAPTER 7 CONCLUSION
7.1 CONCLUSION
The results prove that the deployment of a single-node cluster on the ARM architecture successfully executed the Pi task on the Raspberry Pi, a compact, portable single-board computer. The system was constructed at a cost of less than 3000 INR, equivalent to approximately 48 USD; this covers the primary unit of the single-node architecture, excluding I/O and displays. The model is capable of collecting sensor information via GPIO and USB, of distributing data among interconnected smart computing systems with the help of the Hadoop framework, and of performing parallel processing in a physically clustered, interconnected network architecture with a basic level of fault tolerance.