
ABSTRACT

Data growth rates will continue to accelerate in the coming years. Cloud computing provides a new way of service provision by arranging various resources over the Internet. One of the important cloud services among the existing ones is data storage. Stored data may hold numerous copies of the same content.

Data deduplication is one of the vital techniques for compressing data: it removes duplicate copies of the same data to reduce the storage space required. To protect data that is to be stored on the cloud, the data needs to be stored in encrypted form. The main purpose of the proposed scheme is to ensure that only one instance of each piece of data is stored, which minimizes the amount of storage space used and provides optimized storage capacity.
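The scheme above must store only one instance of each piece of data while still keeping that data encrypted. The text does not name a specific construction, but a common way to reconcile the two goals is convergent encryption, where the key is derived from the content itself so that identical plaintexts produce identical ciphertexts and can therefore be deduplicated. A minimal sketch in Python, where an insecure toy XOR keystream stands in for a real cipher such as AES and all names are illustrative:

```python
import hashlib

def convergent_key(data: bytes) -> bytes:
    # Key derived from the content itself: identical data -> identical key.
    return hashlib.sha256(data).digest()

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Insecure XOR keystream, used only to keep the sketch self-contained;
    # a real system would use an established cipher such as AES.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

class DedupStore:
    """Single-instance store: each unique content is kept exactly once."""
    def __init__(self):
        self.blobs = {}  # fingerprint -> ciphertext

    def put(self, data: bytes) -> str:
        key = convergent_key(data)
        fp = hashlib.sha256(key).hexdigest()  # fingerprint for lookup
        if fp not in self.blobs:              # store only the first copy
            self.blobs[fp] = toy_encrypt(data, key)
        return fp                             # pointer handed back to the user

store = DedupStore()
a = store.put(b"same document")
b = store.put(b"same document")  # duplicate: no new blob is stored
assert a == b and len(store.blobs) == 1
```

Because identical contents map to identical fingerprints, the second `put` only returns a pointer, which is exactly the single-instance behaviour the abstract describes.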

Here we design an effective approach that reduces encryption overhead by combining compression and encryption.

INTRODUCTION

Cloud computing is an IT paradigm that enables access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility. Cloud computing services all work a little differently, but many provide a friendly, browser-based dashboard that makes it easier for IT professionals and developers to order resources and manage their accounts. Some cloud computing services are also designed to work with APIs and CLIs, giving developers more options. Some of the tasks that can be done with the cloud are creating new apps and services, storing data, backing up and recovering data, and streaming audio and video.
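The idea of reducing encryption overhead by compressing data before encrypting it, as proposed in the abstract, can be sketched as follows. `zlib` stands in for the compressor, and an identity function stands in for a real cipher such as AES; the point is only that the cipher processes fewer bytes after compression:

```python
import zlib

def compress_then_encrypt(data: bytes, encrypt) -> bytes:
    # Compressing first means the cipher has fewer bytes to process,
    # which is where the encryption-overhead saving comes from.
    return encrypt(zlib.compress(data, level=9))

# Highly redundant sample text compresses well.
plaintext = b"cloud storage cloud storage cloud storage " * 200
identity = lambda b: b  # stand-in for a real cipher such as AES
ciphertext = compress_then_encrypt(plaintext, identity)
assert len(ciphertext) < len(plaintext)
```

The saving depends entirely on how redundant the input is; already-compressed or random data would see little or no reduction.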

Cloud computing provides three types of services, IaaS, PaaS and SaaS, and three deployment models: public, private and hybrid. The idea of data deduplication was proposed to minimize storage space; it is also called intelligent compression or single-instance storage. In this paper we design and develop a new approach that effectively deduplicates redundant data in documents by using the concept of object-level components, resulting in less data chunking, fewer indexes and a reduced need for tape backup.

This technique focuses on improving storage utilization and can also be applied to network data transfer to reduce the number of bytes that must be sent. Data deduplication can operate at the file level, block level or even bit level. In file-level data deduplication, if two files are exactly alike, only one copy of the file needs to be stored, and subsequent iterations hold a pointer to that file. A change of a single bit, however, requires storing an entire copy of the now-different file. Block-level and bit-level data deduplication look within a file: if the file is updated, only the changed blocks between the two versions are saved. File-level deduplication may require less processing power, since its index is smaller and fewer comparisons are needed, whereas block-level deduplication may take more processing power and use a much larger index to track the individual blocks.
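The trade-off described above can be illustrated with fixed-size blocks: a one-byte change forces file-level deduplication to store a whole new copy, while block-level deduplication needs to re-store only the block that changed. A minimal sketch, where the block size and file contents are purely illustrative:

```python
import hashlib

BLOCK_SIZE = 8  # tiny block size, chosen only for illustration

def file_fingerprint(data: bytes) -> str:
    # File-level dedup compares one hash per file.
    return hashlib.sha256(data).hexdigest()

def block_fingerprints(data: bytes) -> list[str]:
    # Block-level dedup keeps one hash per fixed-size block.
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

v1 = b"the quick brown fox jumps over the lazy dog"
v2 = b"the quick brown fox jumps over the lazy cog"  # one byte changed

# File level: the fingerprints differ, so the whole file is stored again.
assert file_fingerprint(v1) != file_fingerprint(v2)

# Block level: only the blocks that actually changed must be re-stored.
b1, b2 = block_fingerprints(v1), block_fingerprints(v2)
changed = sum(1 for x, y in zip(b1, b2) if x != y)
print(f"{changed} of {len(b1)} blocks changed")
```

This also makes the index-size trade-off concrete: the file-level index holds one entry per file, while the block-level index holds one entry per block, which is why block-level deduplication needs more memory and processing to track its much larger index.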

