How can we fix the missing link holding the blockchain back?

The main obstacle to leveraging blockchain technology in the next wave of digital transformation is the huge gap between the amount of data being generated and the ability of companies to effectively harness and use it.

The global digital supply chain market is projected to grow from $3.92 billion in 2020 to $13.68 billion by 2030, with digitization seen as key to ensuring the necessary agility and resilience across operations to tackle today’s rapidly changing landscape. The headlines suggest that blockchain technology will play a crucial role in this growth, allowing data to be shared quickly and securely between different organizations.

Yet blockchain is just a data store – by itself it is not the “silver bullet” that will unlock the next wave of transformation. The real key to success lies in the collection of data in the first place and the construction of the records that will underpin the digitization. Unless organizations collect the right data in the first place – and leverage it effectively – even the best new technologies will fall victim to the age-old “garbage in, garbage out” problem.

Given the complexity of modern supply chains, the number of influencing factors and the number of separate organizations that need to be connected, next-generation technology is extremely attractive. In particular, blockchain’s ability to give many stakeholders access to trusted data across a supply chain network promises major advances.

Technologies such as blockchain must be fed with data, and there is no shortage of data. The missing link, however, is the huge gap between the amount of data being generated and the ability of businesses to harness and use that data. Blockchain cannot bridge this gap, because that is not what it was designed to do.

Accessing data in the supply chain is very challenging. Relevant data may reside in a separate business unit or an external organization. Without the whole picture, it is difficult to maximize value. Data lakes are often seen as a potential solution, as they allow massive amounts of data to be centralized – meaning stakeholders can connect to the data lake and access the data they need. However, this approach can lead to data protection issues between organizations.

Some blockchain providers offer an enhancement that lets participants upload data and apply permission-based rules to control which other participants can see it. But with so much data, companies struggle to know which data to share with which organization, and to manage those permissions at scale.
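To give a feel for where that management burden comes from, the minimal sketch below models the kind of rule set such a provider might expose. The rule structure, the organization names and the data categories are all hypothetical illustrations, not the API of any particular blockchain platform.

# A minimal sketch of permission-based data sharing rules, assuming a simple
# model in which each uploading organization declares which data categories
# each partner may read. All names and categories here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ShareRule:
    owner: str                      # organization that uploaded the data
    reader: str                     # organization requesting access
    categories: set[str] = field(default_factory=set)  # data types the reader may see

RULES = [
    ShareRule("acme-manufacturing", "global-freight-co", {"shipment_status", "dimensions"}),
    ShareRule("acme-manufacturing", "customs-broker-ltd", {"shipment_status", "commercial_invoice"}),
]

def can_read(owner: str, reader: str, category: str) -> bool:
    """Return True if any rule lets `reader` see `category` data owned by `owner`."""
    return any(
        r.owner == owner and r.reader == reader and category in r.categories
        for r in RULES
    )

# With dozens of partners and hundreds of data types, this rule table grows
# quickly - which is exactly the management problem described above.
print(can_read("acme-manufacturing", "global-freight-co", "commercial_invoice"))  # False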

There is also the issue of the wide range of data types involved. To overcome this challenge, the data must first be structured so that many different data types can be brought together in a coherent way. For example, all data relating to a shipment must be available as a single data object, or data product. This data product will include a variety of data from several systems, with an ontology that defines how the different pieces of data relate to one another.
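To make the idea of a single shipment data product concrete, here is a minimal sketch in Python. The field names, the source systems and the overall shape are assumptions made for illustration; a real ontology would be far richer.

# A minimal sketch of a shipment "data product": one object that gathers data
# originating in several systems (ERP, carrier tracking, customs) under a
# shared ontology. All field and system names are hypothetical.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class OrderDetails:          # typically sourced from an ERP system
    order_id: str
    sku: str
    quantity: int

@dataclass
class TransportEvent:        # typically sourced from a carrier's tracking feed
    timestamp: datetime
    location: str
    status: str              # e.g. "departed", "arrived", "delayed"

@dataclass
class CustomsRecord:         # typically sourced from a customs broker
    declaration_id: str
    cleared: bool

@dataclass
class ShipmentDataProduct:
    shipment_id: str
    order: OrderDetails
    events: list[TransportEvent]
    customs: CustomsRecord | None = None   # may not exist yet early in the journey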

Access to structured data products, as opposed to many streams of different data types, lets users find and analyze all relevant data easily and quickly, allowing more complex queries to be run and much greater insights to be generated.
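As a short usage example building on the hypothetical data product sketched above: because all shipment data sits in one coherent structure, a cross-system question can be answered with a simple query rather than by stitching separate feeds together. The condition chosen here is purely illustrative.

# Example query over a collection of ShipmentDataProduct objects (see the
# sketch above): find shipments that have reported a delay but have not yet
# cleared customs.

def delayed_and_uncleared(shipments: list[ShipmentDataProduct]) -> list[str]:
    return [
        s.shipment_id
        for s in shipments
        if any(e.status == "delayed" for e in s.events)
        and (s.customs is None or not s.customs.cleared)
    ]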

The real key to bridging the data divide is digital twin technology. By creating real-time models of physical objects in the digital world, it makes them available to all participants in a supply chain via a central platform. Each digital twin includes all relevant data for a shipment and is dynamically updated in real time with new data as the shipment moves through the supply chain.
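One way to picture such a digital twin is as a living object on the central platform that absorbs new events as they arrive from connected systems. The sketch below is an assumption about how that might look, not a description of any vendor's implementation; the event sources and payload fields are invented for illustration.

# A minimal sketch of a shipment digital twin that is updated in real time as
# new events arrive from connected systems. The event format is hypothetical.

from datetime import datetime, timezone

class ShipmentTwin:
    def __init__(self, shipment_id: str):
        self.shipment_id = shipment_id
        self.state: dict = {}            # latest known values, keyed by attribute
        self.history: list[dict] = []    # full event trail, useful for auditing

    def apply_event(self, source: str, payload: dict) -> None:
        """Merge a new event from a connected system into the twin."""
        event = {
            "received_at": datetime.now(timezone.utc).isoformat(),
            "source": source,
            **payload,
        }
        self.history.append(event)
        self.state.update(payload)       # the twin always reflects the latest data

twin = ShipmentTwin("SHIP-0001")
twin.apply_event("carrier-tracking", {"location": "Port of Rotterdam", "status": "departed"})
twin.apply_event("customs-system", {"customs_cleared": True})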

Digital twin technology offers a way to combine data from many different sources into a single data product that can be accessed, on a permission basis, by stakeholders across the supply chain network, giving them easy and efficient access to all relevant data. Unlike traditional object-oriented databases, the digital twin is temporary: it exists only for the life cycle of a shipment, after which it disappears from the central platform.

The technology to make this scenario possible is intelligent data orchestration. Using the central digital twin concept, only relevant data is retrieved from connected systems – with little or no effort required from the respective domains. The automated technology brings the disparate data together, structures it to form a complete data product and dissolves it at the end of its operational life cycle – providing a solid data foundation to unlock the next generation of digital transformation.
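Building on the ShipmentTwin sketch above, the code below is a rough illustration of that orchestration life cycle: a twin is created when a shipment starts, enriched with only the relevant records pulled from each connected system, and dissolved when the shipment completes. The connector interface and names are assumptions made for illustration, not a description of any particular product.

# A minimal sketch of intelligent data orchestration around a shipment twin:
# create it, enrich it from connected systems, and dissolve it at the end of
# its operational life cycle. Connector names and methods are hypothetical.

from typing import Protocol

class SourceConnector(Protocol):
    name: str
    def fetch(self, shipment_id: str) -> dict: ...   # return only data relevant to this shipment

class Orchestrator:
    def __init__(self, connectors: list[SourceConnector]):
        self.connectors = connectors
        self.twins: dict[str, ShipmentTwin] = {}      # active twins on the central platform

    def start_shipment(self, shipment_id: str) -> ShipmentTwin:
        twin = ShipmentTwin(shipment_id)
        for connector in self.connectors:             # pull only the relevant data per source
            twin.apply_event(connector.name, connector.fetch(shipment_id))
        self.twins[shipment_id] = twin
        return twin

    def complete_shipment(self, shipment_id: str) -> None:
        # The twin is temporary: once the shipment's life cycle ends,
        # it is removed from the central platform.
        self.twins.pop(shipment_id, None)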

Toby Mills is founder and CEO of supply chain visibility firm Entopy.
