
Solving Big Data Problem using Hadoop File System (HDFS)

Smita Chaturvedi, Nivedita Bhirud, Fiona Lowden Published in Databases

IJAIS Proceedings on International Conference and Workshop on Communication, Computing and Virtualization
Year of Publication: 2015
© 2015 by IJAIS Journal
  1. Smita Chaturvedi, Nivedita Bhirud and Fiona Lowden. Article: Solving Big Data Problem using Hadoop File System (HDFS). IJAIS Proceedings on International Conference and Workshop on Communication, Computing and Virtualization ICWCCV 2015(3):23-28, September 2015. BibTeX

    @article{key:article,
    	author = "Smita Chaturvedi and Nivedita Bhirud and Fiona Lowden",
    	title = "Article: Solving Big Data Problem using Hadoop File System(HDFS)",
    	journal = "IJAIS Proceedings on International Conference and Workshop on Communication, Computing and Virtualization",
    	year = 2015,
    	volume = "ICWCCV 2015",
    	number = 3,
    	pages = "23-28",
    	month = "September",
    	note = "Published by Foundation of Computer Science, New York, USA"
    }
    

Abstract

Big data is data that is too large to be processed on a single machine: data sets so extensive that they must be analysed computationally to reveal patterns, associations, and trends. For example, an e-commerce site such as Amazon records, for each page, how many users visited it, from which IP addresses they came, and how long they stayed; the accumulated log of such events is an example of big data. Huge volumes of data are generated by mobile phones, online stores, and research activity. The data arrives quickly, comes from different sources in various formats, and while most of it is not worthless, much of it has low individual value. Hadoop addresses the big data problem through HDFS (Hadoop Distributed File System), which stores data in distributed form across many machines so that it can be kept cost-effectively and processed efficiently. This paper demonstrates running MapReduce code on Apache Hadoop.
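The MapReduce model that the paper runs on Hadoop can be illustrated, outside Hadoop itself, with a minimal plain-Python sketch of the three phases (map, shuffle, reduce) applied to the classic word-count task. The function names and the sample input are illustrative, not part of Hadoop's API:

```python
from collections import defaultdict

# Map phase: emit a (word, 1) pair for every word in each input line.
def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

# Shuffle phase: group all emitted values by key, as Hadoop does
# between its map and reduce stages.
def shuffle_phase(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: sum the grouped counts for each word.
def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

# Each string stands in for one input split stored as an HDFS block.
splits = ["big data on hadoop", "hadoop stores big data"]
counts = reduce_phase(shuffle_phase(map_phase(splits)))
print(counts)  # prints {'big': 2, 'data': 2, 'on': 1, 'hadoop': 2, 'stores': 1}
```

In real Hadoop, the map and reduce functions run in parallel on the nodes holding the HDFS blocks, and the framework performs the shuffle over the network; the data flow, however, is exactly the one simulated above.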


Keywords

Big data, MapReduce, 3V, Ecosystem, HDFS, Hadoop.