Big Data in Logistics. The new Juice.

The data economy, Industry 4.0, ML/AI, XaaS, mining. Which part of the data story is touching the Shipping industry?

What's new in how businesses operate is the tooling. Tools are built on data now.

In a short series of posts, we'll lay out the business case for Big Data in Logistics: (1) the user's viewport: the motivation, (2) the data itself, (3) processing and analytics, (4) the presentation layer, (5) new products and use cases outside core operations.

In the read below, see an end user touching 'Big Data' in real time: a transport company with a fairly integrated ERP (web and industry data feeds), making the effort to get the full benefit.

To follow along, you need no prior knowledge of system architecture or big data engineering. This is just a proof of concept on a realistic use case.

In a nutshell, the typical data solution has three main layers:

  • raw data layer – capture, observe or sample,
  • analytics layer – shape the raw data into a manageable cube to work with, and
  • presentation layer – for other users to sell, storytell or self-serve.
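To make the three layers concrete, here's a toy sketch in Python. The routes and volumes are purely illustrative, and a real stack would of course use dedicated storage and BI tooling rather than in-memory dicts:

```python
# Toy three-layer pipeline: raw capture -> analytics cube -> presentation.
from collections import defaultdict

# Raw data layer: events as captured, one record per observation.
raw_events = [
    {"route": "CN-NL", "volume_m3": 8.0},
    {"route": "CN-NL", "volume_m3": 6.0},
    {"route": "CN-DE", "volume_m3": 9.0},
]

# Analytics layer: shape the raw events into a manageable cube
# (here: total volume per route).
cube = defaultdict(float)
for event in raw_events:
    cube[event["route"]] += event["volume_m3"]

# Presentation layer: a simple view other users can self-serve from.
for route, total in sorted(cube.items()):
    print(f"{route}: {total:.1f} m3")
```

Each layer only talks to the one below it, which is what lets you swap tooling per layer later on.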

To set the scene.

Business operations run on systems.  No surprise here. Systems may be a company's own on-premises installations, web, cloud, platform, XaaS. Most also operate hybrid models, a mix of the above. The longer the firm has operated, the more complex it gets, as new solutions port over legacy ones and differing concepts get layered. In any case, systems are key for working with data.

Second, a ‘paradigm’ development is well underway: business processes are getting standardized, pooled, shared, outsourced.  That goes for support functions. Standardization also helps with integrating vendor, customer or other affiliates’ data.

Third, the lift and shift towards data is happening amid the ‘constant and relentless’ drive for innovation, which is usually aimed at the product. In consumer-facing operations, it is the leading factor.

The logistics industry is the ultimate process industry, much like banking, telcos and other services. The process steps for a shipment (take it as a unit of production) are algorithmic and susceptible to slicing, dicing, packaging and commoditization.

Motivational scenario: the user’s viewport on logistics infrastructure data. The data system.

Take the fledgling container shipment from the earlier articles. It shipped alright, and a few times over. Now you’ve just started warming up to the LCL (less-than-container-load) demand. Say it’s new ground for you, but your customers are asking for it. It shouldn’t be much of a tooling effort or hassle. You tally up the regulars’ volumes you could win, and there are fresh inquiries you could tap.

You reckon there is enough volume to pack them into a larger box (a 40’HC – that’s a 40-foot high cube, 76 m3) and not just resell a tiered offering. You would market it as a regular service, every other week or so.  You can even put your logo and details on the new service, assuming you qualify as an issuer of transport documents. You decide to give it a go and see how it works. The formalities are all set and you partner with a formidable agent to do the loading out of China. That agent runs their own premises to take the smaller shipments in, stuff the consol box and do the paperwork on a profit share.

Inside the consol box.

Now let’s move to production. As a forwarder, you need to keep track, be precise and have a good handle on the details. The 40’HC you stuff is 76 m3 nominal, and you can only use about 85-90% of it (a rule of thumb for broken stowage). An average LCL is perhaps 6 to 10 m3 of lightweight cargo, so say 10 shipments for simplicity. Is it 10 times the effort? Or 11, with the one-off consol? Or 20, spot-checking each LCL yet again against the consol? How do you automate, streamline and not get overwhelmed?
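The headroom math above can be sketched as a back-of-envelope calculation. The constants are the rule-of-thumb figures from the text (76 m3 nominal, 85-90% usable, 6-10 m3 per LCL); the names are illustrative, not from any system:

```python
# Back-of-envelope consolidation math for a 40'HC box.
NOMINAL_M3 = 76.0
STOWAGE_FACTOR = 0.875   # midpoint of the 85-90% broken-stowage rule of thumb
AVG_LCL_M3 = 8.0         # midpoint of the 6-10 m3 average LCL

usable = NOMINAL_M3 * STOWAGE_FACTOR    # real capacity after broken stowage
parcels = int(usable // AVG_LCL_M3)     # whole average-sized parcels that fit

print(f"usable: {usable:.1f} m3, parcels: {parcels}")
# With these midpoints you land around 8 average parcels per box;
# lighter, smaller LCLs push the count towards the 10 used in the text.
```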

Data feeds we don’t even notice:
  • external – ports, routes, transport modes, equipment types, sailing schedules, vessel and aircraft call signs/tail signs, asset owner/operator, general cargo descriptions,
  • internal (your system repositories) – shippers, receivers, agents, special instructions, commodity details and markings, documents, cost (also external) and sale prices – all protected proprietary/internal,
  • financials (for customers, suppliers, authorities) – from the accounting module, customer credit,
  • customs – HS codes, EU and other regional codification,
  • hazardous materials – UN, IMO, ADR codes,
  • templates for the transport documents (AWB, HAWB, Bill of Lading, HBL, CMR, CIM) – mandatory fields, terms of service, layout, print-ready templates.
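One way to picture how those feeds meet in a single shipment record is a minimal data model. The field names here are hypothetical, not from any particular ERP or TMS; the comments note which feed each field would come from:

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class LclShipment:
    """Illustrative LCL shipment record stitching several data feeds together."""
    # external feeds
    port_of_loading: str           # ports reference data
    port_of_discharge: str
    vessel: str                    # sailing schedules / vessel call signs
    # internal repositories
    shipper: str
    receiver: str
    commodity: str                 # commodity details and markings
    hs_code: str                   # customs codification
    un_number: Optional[str] = None            # hazardous materials, if any
    volume_m3: float = 0.0
    documents: List[str] = field(default_factory=list)  # HBL, CMR, ...
```

Even a toy model like this makes it obvious where the feeds overlap and where your system has to reconcile them.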

Do a reality check to test your setup:

  • open your system, weed through to start a blank consolidation, pick a route, container type and sailing schedule (screens),
  • or copy an old one into a draft – just use the search functionality by route and then equipment, and start filling it up with groupage,
  • or start an LCL with the minimum details for a saved draft (screen) and copy-multiply across the rest of the shipments as new entries, changing details only when necessary (e.g. shippers, cargo details, document references), then attach those that will make the nearest sailing to a firm consol,
  • split (transform) a full container into parcels (errors, reroutes or changed receivers), or do a two-container consolidation (if you can imagine a case for it). Will the system remap tariffs and change settings?
  • can you change the consol once it has nearly filled up? Say one small parcel was late from manufacturing and you need to just drop it.
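The checks above boil down to a handful of operations on a consol object: attach a parcel if it fits, drop a late one, keep a running total. A toy model (all names hypothetical, capacity from the earlier rule of thumb) might look like:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Parcel:
    ref: str
    volume_m3: float

@dataclass
class Consol:
    capacity_m3: float = 66.5                   # usable space in a 40'HC
    parcels: List[Parcel] = field(default_factory=list)

    def used_m3(self) -> float:
        return sum(p.volume_m3 for p in self.parcels)

    def attach(self, parcel: Parcel) -> bool:
        # attach only parcels that still fit in the box
        if self.used_m3() + parcel.volume_m3 <= self.capacity_m3:
            self.parcels.append(parcel)
            return True
        return False

    def drop(self, ref: str) -> None:
        # e.g. a parcel that missed the sailing: just remove it
        self.parcels = [p for p in self.parcels if p.ref != ref]
```

If your system can't express these few operations cleanly, the reality check above will surface it quickly.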

If these simple tests went through, you know the ropes.

In the next instalment we’ll take a look at the analytics layer and process mining, and what is really actionable there to improve your operations, i.e. sell more or spend less.

Stay tuned, and stay healthy,

Systems.dintro.net | systems@dintro.net
