Dânia Meira

Railway maintenance and autonomous vehicles: more similar and more different than you’d think

Irina Vidal Migallón, Technical Lead AI & Computer Vision, Siemens Mobility


#datalift use case in production

Presented live at #datalift No 1 on 25 November 2020


Irina Vidal Migallón, Technical Lead AI & Computer Vision

“Deployment must inform all the steps taken before. For instance, if you're doing detection and were to use the most massive, most powerful detector out there, but you find you don't have the hardware to run it, then you need to go back to the whiteboard and choose something else.”


Siemens Mobility has a division focusing on computer vision and machine learning for different products around mobility, particularly in the railway domain. Products range from passenger assistance to autonomous driving. For example, we increase comfort for travellers by finding spots for a wheelchair, and we improve safety by detecting aggression. For autonomous driving, our approach covers typical tasks such as obstacle detection and track segmentation.


Monitoring the state of rail tracks and autonomous driving have many tasks in common: detect obstacles, detect people, detect assets, detect damage. As pre-production work, it may seem a no-brainer to approach them in similar ways. The devil is in the details, though, and those details happen to be 90% of the product. Sensors, deployment platform, processing hardware, feedback loop, laws: they have a more significant impact than the machine learning technique itself.


The difference deployment makes

We often see issues around computer vision and machine learning that look similar; deceptively similar, in fact. I say ‘deceptively’ because even though the challenges may point to the same machine learning solution, that solution is part of different products, and we deploy each of them differently. Deployment is more than a click. You iterate to find the best solution for your customers and users, and you work with your DevOps team, for example, to check feasibility.

At Siemens Mobility, we are working on different use cases for a variety of users. Consider the usual machine learning cycle: from data processing to training and, finally, to deployment. That final step, deployment, is what makes the difference; it makes every use case different. I'm going to show you a couple of use cases that, at their core, share the same machine learning concept, but you'll see how they look nothing like each other precisely because of how they're deployed.



Use case 1: Inspection of rail track joints

The first use case is the inspection of rail track joints. We gather data with specially equipped rail vehicles that collect images as they run along the track. At its core, the machine learning task is to locate the joints and determine whether each joint is damaged and, if so, what type of damage it is.

For this use case, our customer is a rail maintainer. Rail maintainers make sure that the railway is safe, for example by checking the joints along kilometers and kilometers of rail across countries and identifying any damage to those joints. In the past, someone either had to survey the rail track in person or manually go through many images. Most of the footage contains nothing relevant, because there isn't a joint every millimeter. In effect, you have terabytes of data, and only a small part of it is applicable.

Siemens Mobility has a cloud application that lets rail maintainers survey the joints through a user interface. The workflow is as follows: there is a bucket with terabytes of images, and we have a couple of containers that first detect the joints and then classify whether each joint is damaged and, if so, what type of damage it is. Data on damage, amongst other data, is stored in a database. The user can go through a map, click on the joints flagged as damaged, and look at the specific information.
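
To make the detect-then-classify idea more concrete, here is a minimal sketch of such a two-stage pipeline. The data structure and the detect_joints/classify_damage functions are hypothetical placeholders standing in for the real containers and models, not Siemens Mobility's actual implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Joint:
    image_id: str
    bbox: tuple                         # (x, y, width, height) of the detected joint
    damage_type: Optional[str] = None   # None means "no damage found"

def detect_joints(image_id: str) -> List[Joint]:
    """Placeholder for the first container: a joint detector."""
    # A real detector would return the bounding boxes it finds in the image.
    return [Joint(image_id=image_id, bbox=(120, 40, 64, 64))]

def classify_damage(joint: Joint) -> Joint:
    """Placeholder for the second container: a damage classifier."""
    # A real classifier would look at the image crop inside joint.bbox.
    joint.damage_type = "cracked"       # illustrative label only
    return joint

def inspect(image_ids: List[str]) -> List[Joint]:
    """Detect joints, classify them, and keep only the damaged ones."""
    damaged = []
    for image_id in image_ids:
        for joint in detect_joints(image_id):
            joint = classify_damage(joint)
            if joint.damage_type is not None:
                damaged.append(joint)   # in production this would go to the database
    return damaged

if __name__ == "__main__":
    print(inspect(["frame_000123.png"]))
```

In production, the two stages would run as separate containers reading from the bucket and writing their results to the database that feeds the map view.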

A critical part of this process is the feedback loop with the user. Our users have a wealth of domain knowledge but are perhaps not familiar with the data requirements of machine learning. Before you go into development, you need to find the right way to capture their input and to learn how they wish to interact with the system. After you've deployed, you keep checking how the user interacts with your application.


Use case 2: Signal recognition for autonomous rail

The second use case is signal recognition. Regardless of whether we're talking about city trams or mainline trains, the signals are scattered along the route. At its core, this is a location and classification problem. However, the devil is in the details.

In this use case, we know that we cannot rely on a data link to the train. We need a computer on the train. Moreover, the predictions need to run in real time, and the system has to work at all times, whether the temperature is +40 °C or -40 °C. The use case requires incredibly robust hardware on the train.
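
As a rough illustration of what "everything on the train, in real time" implies, here is a minimal sketch of an on-board inference loop with a fixed per-frame latency budget. The 50 ms budget, the frame grabber, and the recogniser are assumptions made for the sketch, not actual specifications.

```python
import time

FRAME_BUDGET_S = 0.05   # assumed 50 ms per frame; not a real specification

def grab_frame():
    """Placeholder for the camera interface on the vehicle."""
    return object()

def recognise_signals(frame):
    """Placeholder for the on-board signal detector/classifier."""
    return []   # a real model would return detected signals and their states

def run_loop(max_frames: int = 1000) -> None:
    for _ in range(max_frames):
        start = time.monotonic()
        frame = grab_frame()
        signals = recognise_signals(frame)
        # ... hand the detections over to the driving logic here ...
        elapsed = time.monotonic() - start
        if elapsed > FRAME_BUDGET_S:
            # In a safety-critical system this would trigger a defined
            # degraded mode rather than just a log line.
            print(f"Latency budget exceeded: {elapsed * 1000:.1f} ms")

if __name__ == "__main__":
    run_loop(max_frames=10)
```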


Considering differences

Let’s consider some of the resulting differences in using machine learning for these two use cases. For rail track inspection, we roll out on a cloud platform, run on CPUs, and use the Python stack. We use S3 buckets, and because the latency requirements aren't strict, we can work in batches. If it takes a little longer, that's quite all right. Also, there's no competition for resources with other processes.
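
Here is a small sketch of what that batch-oriented, CPU-only setup could look like in Python, using boto3 to page through an S3 bucket. The bucket name, prefix, and batch size are illustrative assumptions, and process_batch stands in for the actual detection and classification step.

```python
import boto3

BUCKET = "rail-inspection-images"   # hypothetical bucket name
PREFIX = "survey-run-2020-11/"      # hypothetical key prefix
BATCH_SIZE = 64

def iter_image_keys(bucket: str, prefix: str):
    """Yield object keys page by page; latency is not critical here."""
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            yield obj["Key"]

def process_batch(keys: list) -> None:
    """Placeholder for running detection and classification on one batch."""
    print(f"Processing {len(keys)} images")

def run() -> None:
    batch = []
    for key in iter_image_keys(BUCKET, PREFIX):
        batch.append(key)
        if len(batch) == BATCH_SIZE:
            process_batch(batch)
            batch = []
    if batch:                       # flush the final, partial batch
        process_batch(batch)

if __name__ == "__main__":
    run()
```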

With signal recognition, the biggest concern is safety. The solution must be stable. The hardware needs to remain steady at all times and under all temperatures, which already constrains the kind of GPU you can use, or even whether you can use a GPU at all. There is special legislation, and for autonomous driving its development is still in the early stages. And there is the real-time requirement.

What from a pre-production perspective looked like two very similar machine learning problems (both are ‘detect & classify’) changes completely once you take the customer’s perspective and the solution’s deployment into account.

In sum, how you deploy shapes the whole product and also affects the original machine learning algorithms.


 

About #datalift

Organized by the AI Guild, #datalift is about data analytics and machine learning use cases in production.


From September 2020 to June 2021, we hosted Season 1 entirely online. The #datalift No 1 to No 5 events had 40+ AI Guild members showing best practices from 12 industries to over 5.2k registered attendees.


We are now in the middle of Season 2, which runs from November 2021 to July 2022 and is a hybrid experience: Online + 3-day Summit.


Invitation to #datalift summit




#datalift Summit is in Berlin from 22 to 24 June 2022.


The confirmed speakers and partners are listed on the main page: www.thedatalift.eu


Get your Early Bird ticket with a discount until 31 January!


