VOLUME 14, ISSUE 1: 1 October 2018 to 31 December 2020
VOLUME 14, ISSUE 1: 1 November 2018 to 31 December 2020
VOLUME 13, ISSUE 1: 1 April 2017 to 30 June 2017
VOLUME 12, ISSUE 1: 1 January 2017 to 31 March 2017
VOLUME 11, ISSUE 1: 16 October 2016 to 31 December 2016
VOLUME 9, ISSUE 1: 15 April 2016 to 14 June 2016
VOLUME 8, ISSUE 1: 15 January 2016 to 14 April 2016
VOLUME 7, ISSUE 1: 15 October 2015 to 14 January 2016
VOLUME 6, ISSUE 1: 15 July 2015 to 14 October 2015
VOLUME 5, ISSUE 1: 16 April 2015 to 15 July 2015
VOLUME 4, ISSUE 1: 16 January 2015 to 15 April 2015
VOLUME 2, ISSUE 1: 16 August 2014 to 15 November 2014
VOLUME 1, ISSUE 1: 15 June 2014 to 15 August 2014
Fault tolerance in the cloud is a major concern for guaranteeing the availability and reliability of critical services and of application execution. To minimize the impact of failures on the system and on application execution, failures should be anticipated and handled proactively. Many research issues in the cloud remain to be fully addressed, such as fault tolerance, workflow management, workflow scheduling, and security. Offering fault tolerance as a service requires the service provider to implement generic fault tolerance mechanisms so that the client's applications deployed in virtual machine instances can transparently obtain fault tolerance properties. To this end, we propose a hybrid mechanism of VM migration to reduce the occurrence of faults, together with a pre-copy mechanism for recovering the VM instance when a fault occurs.
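The pre-copy idea mentioned above can be sketched in outline: memory pages are copied iteratively while the source VM keeps running, pages dirtied during a round are re-sent in the next round, and a brief stop-and-copy finishes the transfer once the dirty set is small. This is a minimal illustrative sketch, not the paper's implementation; the names `get_dirty_pages` and `stop_and_copy` and the round/threshold parameters are assumptions.

```python
def pre_copy_migrate(pages, get_dirty_pages, stop_and_copy,
                     max_rounds=5, threshold=8):
    """Iterative pre-copy transfer; returns the number of copy rounds
    performed before the final stop-and-copy phase.

    pages           -- dict of page id -> page contents on the source VM
    get_dirty_pages -- callable returning the set of pages dirtied since
                       the last round (assumed hypervisor facility)
    stop_and_copy   -- callable that pauses the VM briefly and sends the
                       remaining dirty pages
    """
    transferred = dict(pages)            # round 0: copy every page once
    dirty = set(get_dirty_pages())
    rounds = 1
    while len(dirty) > threshold and rounds < max_rounds:
        for p in dirty:                  # re-send only pages dirtied last round
            transferred[p] = pages[p]
        dirty = set(get_dirty_pages())
        rounds += 1
    stop_and_copy(dirty)                 # short downtime: send the remainder
    return rounds
```

The trade-off encoded by `threshold` and `max_rounds` is standard for pre-copy schemes: more rounds shrink the final downtime but lengthen total migration time.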
Hadoop users specify application computation logic in terms of a map and a reduce function; such programs are commonly termed MapReduce applications. The Hadoop Distributed File System (HDFS) stores MapReduce application data on Hadoop cluster nodes called DataNodes, while the NameNode acts as the control point for all DataNodes. Although this design increases resilience, its current data-distribution methodology is not necessarily efficient for heterogeneous distributed environments such as public clouds. Hadoop has two core components: HDFS, which stores large data reliably, and MapReduce, a programming model that processes data in a parallel and distributed manner. Hadoop does not perform well on small files: a large number of small files places a heavy burden on the NameNode of HDFS and increases MapReduce execution time. Hadoop is designed to handle huge files and hence suffers a performance penalty when dealing with a large number of small files. This work presents a new approach to handling small files on Hadoop with a smaller performance penalty. In the proposed approach, small files are merged using the MapReduce programming model on Hadoop. The approach clusters the files before merging them, so that merged files do not grow unexpectedly large. The clustered approach also uses less memory on the MapReduce server after the files are merged, which improves Hadoop's performance.
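The clustering step above — grouping small files so that a merged file never exceeds a target size such as the HDFS block size — can be sketched as a greedy first-fit grouping. The function name and the first-fit policy are illustrative assumptions, not necessarily the paper's exact clustering algorithm.

```python
def cluster_small_files(file_sizes, block_size):
    """Group small files into clusters whose total size stays within
    block_size, so each merged file fits one HDFS block.

    file_sizes -- dict of file name -> size in bytes
    Returns a list of clusters, each a list of file names.
    """
    clusters = []  # each entry: [total_size, [file names]]
    # Placing larger files first tends to pack clusters more tightly.
    for name, size in sorted(file_sizes.items(), key=lambda kv: -kv[1]):
        for c in clusters:
            if c[0] + size <= block_size:   # first cluster with room
                c[0] += size
                c[1].append(name)
                break
        else:
            clusters.append([size, [name]])  # open a new cluster
    return [names for _, names in clusters]
```

For example, files of 60, 50, 40, and 30 units with a 100-unit block would yield two clusters, so two merged files replace four NameNode entries.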
The application of cloud computing has increased by leaps and bounds. It is a way of computing in which services are delivered across the Internet using models and levels of abstraction. As applications multiply, issues such as fault tolerance, workflow management, workflow scheduling, and security grow accordingly. Fault tolerance in the cloud is required for both VM migration and task migration, which are major concerns in guaranteeing the availability and reliability of critical services. To minimize failure effects on the system and its execution, failures should be anticipated and handled proactively. To achieve this, we define a new aggregation of fault-tolerance schemes for migrating VM instances. Fault tolerance mechanisms can be applied directly at the virtualization layer rather than within the application itself.
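Anticipating failures proactively, as the abstract proposes, usually rests on detecting unhealthy hosts before their services fail, so that their VMs can be migrated in time. A minimal heartbeat-timeout check, with purely illustrative names, might look like:

```python
def check_heartbeats(last_beat, now, timeout):
    """Return hosts whose last heartbeat is older than `timeout` seconds.

    last_beat -- dict of host name -> timestamp of last heartbeat
    Hosts returned are candidates for proactive VM migration before
    an actual failure occurs.
    """
    return [host for host, t in last_beat.items() if now - t > timeout]
```

A monitoring loop would call this periodically and hand the returned hosts to the migration mechanism at the virtualization layer.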