2.1. Volume
Volume is the measurement of the amount of available data of all types, generated from different resources and continuing to expand in size, up to 2.5 exabytes [1, 3, 7]. The size of data is becoming larger than terabytes (10^12 bytes) and petabytes (10^15 bytes), and it is expected to increase to zettabytes (10^21 bytes); this rise of data outstrips traditional storage and analysis techniques [2, 5, 17, 30]. The availability and gathering of huge amounts of data allow hidden patterns and information to be uncovered through data analysis, which provides an opportunity to predict future patterns and to share analysis results among research communities [3]. Yadranjiaghdam et al. [14] mentioned that the size of digital data in 2011 was estimated at 1.8 zettabytes (1.8 × 10^21 bytes), and that the volume of data predicted for 2020 will be 50 times more (50 × 1.8 × 10^21 bytes); this increase is due to data generated by technologies such as smartphones and the Internet. The quantity of generated, stored, and used data is now growing explosively, which has encouraged several organizations to provide different estimations and forecasts [20]. Khan et al. [25] highlighted that the International Data Corporation (IDC) and EMC Corporation predicted that the amount of data generated in 2020 will be 44 times greater (40 zettabytes) than in 2009, and that this rate of increase is expected to persist at 50% to 60% annually. This increase of data forces an increase in the capacity of HDD storage [25]. Volume refers to the sheer amount of data available for analysis; that volume is driven by the increasing number of data-collection tools as well as the ability to store and transfer those data with advanced data storage and networking tools and applications [31]. Gandomi et al. [36] defined volume as the magnitude of data, where sizes are reported in multiple terabytes and petabytes.

2.2. Velocity
Yadranjiaghdam et al. [14] highlighted that huge big data sets are produced and generated rapidly every second. Velocity is the measurement of the speed of data creation, streaming, aggregation, and transmission from different resources [2, 3, 7]. In addition, it covers the management of the processing, transformation, and transition of the data, also known as the flow of the data [2, 5, 7]. Organizing, accessing, and processing data as they are collected, so that they can be included in decision making in real-time applications, is usually the most important technical challenge [14]. Real-time data are accessible through many technologies, such as smartphones, RFID, and online transactions [20], which indicates that real-time data from all such sources can be accumulated at the speed at which they are generated [20]. High-velocity data are created in a real-time manner [17]. The flow of real-time and streaming data can help researchers and business analysts make valuable decisions based on the observed values [30]. Velocity refers both to the speed at which these data-collection events can occur and to the pressure of managing large streams of real-time data [31]. Gandomi et al. [36] defined velocity as the rate at which data are generated and the speed at which they should be analyzed and acted upon.

2.3. Variability
Moorthy et al. [20] defined the variance of data as the information content in the data, taking into account differences in temporal and spatial data at different levels. Variability is considered a challenge due to the inconsistency of the data flow and the variance in the amount of data [2]. Japec et al. [31] mentioned that variability refers to inconsistency of the data across time. Gandomi et al. [36] defined variability as variation in the data flow rates.

2.4. Complexity
Complexity is the measurement of the interconnection between data gathered from different resources [2, 7].
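The compound growth rates cited in the Volume discussion above (50% to 60% annually, starting from an estimated 1.8 zettabytes in 2011) can be illustrated with a short calculation. This is only a sketch of the arithmetic behind such projections; the function name and the loop over rates are illustrative, and the figures are the estimates cited in the text, not independent data:

```python
# Illustrative sketch: compound a starting data volume (in zettabytes)
# at the annual growth rates cited above. Figures are the paper's
# quoted estimates, used here only to show how projections compound.

def project_volume(start_zb: float, annual_growth: float, years: int) -> float:
    """Project a data volume forward by compounding annual growth."""
    return start_zb * (1 + annual_growth) ** years

# Starting from the cited 1.8 ZB estimate for 2011, nine years to 2020:
for rate in (0.50, 0.60):
    vol = project_volume(1.8, rate, 9)
    print(f"at {rate:.0%} annual growth: {vol:.1f} ZB by 2020")
```

Such a calculation makes clear why even modest-sounding annual rates produce the order-of-magnitude jumps (tens of zettabytes) quoted in the forecasts above.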
Complexity is also measured by the data structure, hierarchies, and correlated relationships that change continuously based on the collected data [7]. Japec et al. [31] mentioned that complexity is the need to link multiple data sources. Gandomi et al. [36] defined complexity as the fact that big data are generated through countless sources.

2.5. Value
Value is the measurement of data usefulness, which elaborates on discovering the hidden values in the collected data [3, 7]. Kaisler et al. [7] mentioned that organizations emphasize the value produced as a result of big data analysis. In addition, Kaisler et al. [7] conducted a survey that suggested five generic areas where big data can support value creation for organizations:
1. Availability of big data for business analysis creates transparency, which increases product quality and reduces cost and time.
2. Supporting experimental analysis in different locations, which can measure decisions or approaches.
3. Assisting in defining market segmentation at different levels.
4. Supporting real-time analysis and decisions based on sophisticated analytics applied to data sets.
5. Facilitating computer-assisted innovation in products based on embedded product sensors indicating customer responses.
This property is considered an important one because it helps in predicting behaviors based on the gathered data, which in turn enhances the decision-making process [7]. Also, there is always hidden and unknown valuable information in the huge amount of stored data that needs to be extracted in order to be utilized to enhance business operations [14]. Gandomi et al. [36] mentioned that Oracle introduced the value characteristic and defined it as follows: data received in its original form usually has a low value relative to its volume; however, a high value can be obtained by analyzing large volumes of such data [36].

2.6. Veracity
Yadranjiaghdam et al. [14] introduced veracity, which emphasizes that the quality of the captured dataset determines the accuracy of the analysis process. Veracity is one of the important features, as it provides authenticity of data through automation of data gathering and capture [20]. Veracity emphasizes the clearance and cleaning of received data to simplify the process of data analysis [30]. Japec et al. [31] mentioned that veracity is the ability to trust that the data are accurate. Gandomi et al. [36] mentioned that IBM defined the veracity characteristic as the representation of the unreliability that can be inherent in some sources of data.

2.7. Validity
Validity is the requirement that the data should represent the expected concept [20].

2.8. Venue
Venue refers to the data platforms, databases, data warehouses, format heterogeneity, and data generated for different purposes from different resources [20].

2.9. Vocabulary
Moorthy et al. [20] introduced the "vocabulary" characteristic, which collects the new concepts, definitions, theories, and technical terms associated with big data that were not necessarily required in the earlier context, such as MapReduce, Apache Hadoop, NoSQL, and metadata [20].

2.10. Vagueness
Vagueness is related to the confusion about the meaning of, and overall developments around, big data [20]. Vagueness is not necessarily considered a characteristic of big data deployment, but it reflects the current context [20].

2.11. Exhaustive
Kitchin [17] noted that big data are exhaustive in scope, striving to capture entire populations of datasets and data with larger sample sizes than those used in traditional or small-data studies or systems.

2.12. Fine-grained
Kitchin [17] noted that big data are fine-grained in resolution, aiming to be as detailed as possible and uniquely indexical in identification.

2.13. Relational
Kitchin [17] noted that big data are relational in nature, containing common fields that enable the conjoining of different data sets.

2.14. Flexible
Kitchin [17] noted that big data are flexible, allowing new fields to be added easily and the data size to be expanded.
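Of the technical terms introduced under the vocabulary characteristic (Section 2.9), MapReduce is the most concrete: it names a two-phase pattern of mapping inputs to key-value pairs and then reducing the grouped values. The single-process word-count sketch below is only an illustration of that pattern under simplified assumptions; real frameworks such as Apache Hadoop distribute the same phases across many machines:

```python
from collections import defaultdict
from itertools import chain

# Illustrative single-machine sketch of the MapReduce word-count pattern.
# In a real framework each phase runs in parallel across a cluster.

def map_phase(document: str):
    """Map step: emit a (word, 1) pair for every word in a document."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle step: group intermediate pairs by key, as the framework would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce step: sum the counts emitted for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

documents = ["big data is big", "data velocity and data volume"]
pairs = chain.from_iterable(map_phase(doc) for doc in documents)
counts = reduce_phase(shuffle(pairs))
print(counts["data"])  # "data" appears three times across the two documents
```

The value of the pattern for big data is that the map and reduce steps are independent per key, which is what lets the volume and velocity described above be handled by adding machines rather than redesigning the analysis.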