The Impact of Big Data on the Future

Akshaya Balaji
Sep 17, 2020
Photo by NASA on Unsplash

The term ‘big data’ has been in use since the early 2000s. It refers to datasets so large and complex that they cannot be collected, stored, or processed using conventional systems. The name is slightly misleading, as it points only to the volume of the data being considered. However, technology has changed and grown in both scope and complexity over the last few decades, and two more aspects of data have become prominent: the velocity and variety of data. Together with volume, they comprise the three Vs of big data.

The volume of data is still significant because the world is generating quintillions of bytes of data daily. Big data is collected from a wide range of sources: social networking sites such as Facebook and Twitter, search engines such as Google, and even streaming services such as Netflix. Handling such large volumes is hard with traditional data processing systems. The velocity of data is also important, since it indicates the rate at which data is received and must be processed. Finally, variety refers to the form the data takes: it can be structured, semi-structured, or unstructured, and the latter two types need more processing than structured data.

Today, the three Vs of big data have expanded to include further aspects such as veracity and value, which only adds to the complexity of handling the data generated or collected under many constraints.

Figure: V’s of Big Data (Source: Google Images)

Big Data is expected to have a significant effect on the world of data analysis and other data-oriented services in the coming years. Globe Newswire published an analysis of the Global Big Data Analytics Market, with forecasts up to the year 2027. Quoting the report,

The Global Big Data Analytics Market was valued at US$ 37.34 billion in 2018 and expected to reach US$ 105.08 billion by 2027 at a CAGR of 12.3% throughout the forecast period from 2019 to 2027.
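As a quick sanity check on those figures, the compound annual growth rate (CAGR) can be recomputed from the quoted start and end values. The snippet below is a minimal sketch of that arithmetic in Python; the small gap from the reported 12.3% comes from rounding in the dollar figures.

```python
# Recompute the CAGR implied by the quoted market figures.
start_value = 37.34   # US$ billion in 2018
end_value = 105.08    # US$ billion forecast for 2027
years = 2027 - 2018   # 9-year forecast period

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"CAGR = {cagr:.1%}")  # ~12.2%, in line with the reported 12.3%
```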

There are numerous other statistics that provide more insight into how the handling of big data can shape markets in the years to come. From these, we can infer that Big Data will play a vital role in every technological advancement coming our way.

But why is big data so important?

Big data is valuable because of the uses to which the collected data can be put, and a multitude of fields use it in different ways. For example, companies like Netflix use big data to predict and handle customer demand. The data collected can be used to build predictive models (recommendation systems) that promote shows similar to those a user has already watched, which helps provide better customer service and experience. Similarly, big data is very helpful for decision making in fields like stock exchanges and product development.
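To make the recommendation idea concrete, here is a minimal sketch of item-based similarity on a toy viewing matrix. The show names and viewing data are invented for illustration, and this is not Netflix's actual method, which operates at a vastly larger scale.

```python
import numpy as np

# Toy user-by-show matrix: 1 = watched, 0 = not watched (invented data).
shows = ["Show A", "Show B", "Show C", "Show D"]
views = np.array([
    [1, 1, 0, 0],   # user 0
    [1, 1, 1, 0],   # user 1
    [0, 0, 1, 1],   # user 2
])

def cosine_similarity(a, b):
    """Cosine similarity between two show (column) vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user, k=2):
    """Score each unwatched show by its similarity to the user's watched shows."""
    watched = np.flatnonzero(views[user])
    scores = {}
    for j in range(len(shows)):
        if views[user, j] == 0:
            scores[shows[j]] = sum(
                cosine_similarity(views[:, j], views[:, w]) for w in watched
            )
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(0))  # ranks Show C first: it is co-watched with Shows A and B
```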

So, how do we handle big data?

Handling big data is critical because we need a methodology that allows us to collect, store, and process large volumes of data of different varieties. Several methodologies are defined in the industry today, but all of them can be implemented only after the storage requirement is satisfied. Big data requires storage in petabytes or even exabytes, so conventional storage systems will not be of much help.

Storing a large amount of data on a single device is possible, but it raises the issues of the cost of manufacturing such storage devices and the time required to read and write data on them. Hence, we need an alternative to single-point storage of data. The immediate solution is to store different parts of the data on different storage devices, with each device connected to the others through a network. The connected devices can then work together to retrieve the required parts of the data from their respective stores. This method of storing data is called a distributed storage system.
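As a simple illustration of the ‘different parts on different devices’ idea, the sketch below splits a byte stream into fixed-size blocks, the unit a distributed store would spread across machines. The 128 MB block size is borrowed from HDFS's default; everything else is invented for illustration.

```python
BLOCK_SIZE = 128 * 1024 * 1024  # 128 MB, HDFS's default block size

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Split raw bytes into fixed-size blocks for distribution."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

# A small demo with a 1 KB block size so the example runs instantly.
blocks = split_into_blocks(b"x" * 2500, block_size=1024)
print([len(b) for b in blocks])  # [1024, 1024, 452]
```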

Distributed storage systems involve a central device (the master node), which uses the storage provided by one or more worker nodes. The aggregated storage of the worker nodes enables large volumes of data to be stored in blocks. Many products in the industry, such as Hadoop and MongoDB, provide distributed storage along with data processing capabilities.
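Continuing the sketch, a master node can keep a catalog that maps each block to the worker nodes holding a copy. The round-robin placement and replication factor below are deliberate simplifications; real systems such as Hadoop's HDFS use more sophisticated, rack-aware placement.

```python
from collections import defaultdict

class MasterNode:
    """Toy master that records which workers store each block.
    A sketch, not how Hadoop or MongoDB actually implement placement."""

    def __init__(self, workers, replication=2):
        self.workers = workers
        self.replication = replication
        self.catalog = defaultdict(list)  # block id -> workers holding a copy
        self._next = 0

    def place(self, block_id):
        """Assign a block to `replication` workers, round-robin."""
        for _ in range(self.replication):
            worker = self.workers[self._next % len(self.workers)]
            self.catalog[block_id].append(worker)
            self._next += 1

    def locate(self, block_id):
        """Return the workers a client should contact for this block."""
        return self.catalog[block_id]

master = MasterNode(["worker-1", "worker-2", "worker-3"])
for block_id in range(4):
    master.place(block_id)
print(master.locate(0))  # ['worker-1', 'worker-2']
```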

Conclusion

Big data might be a term that highlights the volume of data, but it encompasses much more. It requires multiple technologies, such as storage solutions and processing methods, to work in tandem to truly make the most of the data generated or collected.

This article is simply a glimpse into what big data is, and how we can handle the first challenge of big data, which is storing large volumes of data.
