Merely connecting ‘things’ and collecting data does not fulfill the true promise of IoT. Analyzing these data with powerful analytics can deliver real value, but in the conventional programmable-computing approach the data are guided through a sequence of prearranged processing steps to produce results. This rigid process limits the ability of IoT systems to address the many problems of a complex, real-time, evolving world. The information extracted from these devices is complex, high-volume, and unpredictable, so analyzing it with a prearranged setup rarely produces efficient results.

This limitation can be illustrated with a weather-prediction application scenario. Weather predictions have many uses; weather warnings are especially important forecasts because they are used to protect life and property. Application scenario: suppose temperature, humidity, wind-speed, visibility, rainfall, pressure, and wind-direction sensors are deployed at a number of stations, and the data collected from these sensors are analyzed to predict rain, fog, drizzle, snow, and thunder. Processing these data with programmable computing will never give the required prediction accuracy, because it ignores the effects of sensor aging and of environmental conditions on the deployed sensors. Most computational algorithms and conventional learning techniques assume that the data are generated from a fixed probability distribution and may therefore be called stationary, but because of these effects, data from IoT devices are non-stationary. Stationarity here means that the data are generated from a fixed, although unknown, probability density function. In practical applications this assumption is not valid for sensor-generated data, because the process generating the data may no longer be stationary: the probability density function (pdf) of the data evolves, or drifts, over time, i.e., non-stationary data are generated.
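The effect described above can be made concrete with a minimal sketch: simulated readings from a single temperature sensor whose bias drifts slowly over time (as with sensor aging), so the mean of the generating distribution is not fixed. The baseline value, drift rate, and noise level below are hypothetical parameters chosen only for illustration, not values from any real deployment.

```python
import random
import statistics

def sensor_readings(n, drift_per_step=0.01, seed=42):
    """Simulate a sensor whose bias drifts over time (e.g. aging):
    the mean of the generating distribution shifts slowly, so the
    data stream is non-stationary."""
    rng = random.Random(seed)
    baseline = 20.0  # hypothetical baseline reading
    # Each reading is Gaussian noise around a mean that drifts with t.
    return [rng.gauss(baseline + drift_per_step * t, 0.5) for t in range(n)]

data = sensor_readings(2000)

# A "prearranged" stationary model would estimate a single fixed mean
# from an early window and keep using it. Comparing early and late
# windows shows how far the generating distribution has moved.
early_mean = statistics.mean(data[:200])
late_mean = statistics.mean(data[-200:])
drift = late_mean - early_mean

print(f"early-window mean: {early_mean:.2f}")
print(f"late-window mean:  {late_mean:.2f}")
print(f"accumulated drift: {drift:.2f}")
```

With these assumed parameters the late-window mean sits far from the early-window mean, so any model fitted once on early data and never updated accumulates a growing prediction error, which is exactly why a fixed, prearranged processing pipeline is ill-suited to such streams.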