Tata Motors Autonomous Vehicle: Function Development and Testing
Mark Tucker, Tata Motors
UK Autodrive is an ambitious three-year project that is trialing the use of connected and self-driving vehicles on the streets of Milton Keynes and Coventry. As part of this work, Tata Motors are developing autonomous (self-driving) cars. The autonomous trials began in a controlled test-track environment, before moving through progressively complex urban scenarios, culminating on the streets of UK cities.
In this talk, you will learn what it takes to develop the complex control systems required for an autonomous vehicle, and how Tata Motors have used Simulink®, Robotics System Toolbox™, and Simulink Real-Time™ to develop the algorithms for trajectory planning and motion control and deploy them into the autonomous vehicles for testing.
Recorded: 3 Oct 2018
Thank you very much. So quite quickly I'll nip through who we are. You've possibly heard of the Tata Group. Particularly in India, it's a very big group, and it's actually multinational. You may have heard of some of the brands over here, which include things like Tata Steel, Tata Communications, and Tata Elxsi. Tata Motors is part of that, and we are a part of Tata Motors.
So Tata Motors is quite large in terms of its commercial vehicles and also its passenger vehicles. It also, as many of you know, owns Jaguar Land Rover. And we're part of that, being the Tata Motors European Technical Centre. Our role originally was basically to take technology from Europe into India.
So in the Indian passenger vehicle market in particular, there were no indigenous Indian brands. It was about getting technology there so they could develop and build their own vehicles, which they are now doing. They're also getting lots more imports, so the market has changed an awful lot in India. But we're there to support the parent company in India.
We're actually based in and around Coventry. We've got a workshop, we've got a design studio, and we've also got offices on the university campus. For those of you who ever go to the campus, we're now actually going to be moving into this new building on the left, which is grandly going to be titled the National Automotive Innovation Centre. We're going to share that with Jaguar Land Rover, Warwick Manufacturing Group, and other partners as well.
So that's who we are. This talk will concentrate on autonomy, but I just wanted to set the scene fairly simply: there's a difference between ADAS and autonomy. ADAS, as the name suggests, means supporting the driver in some functions of driving, and a lot of these functions are already available in cars—things like adaptive cruise control and lane departure warning. So the driver gets some additional assistance in the longitudinal and lateral motion of the vehicle.
Whereas with autonomous vehicles, it's more about taking away some of that responsibility from the driver and taking over the task of lateral and longitudinal control. And to do that, the system needs to sense the environment as well as the vehicle in order to control the vehicle. These are the SAE levels of autonomy. I know that's very small text, but I'll go through it very quickly.
Essentially, the first three levels are all to do with the driver and driver assistance. Level zero is where the driver is fully in control. We then get on to level one, driver assistance, where some of the tasks are taken away from the driver, or the driver is supported in some of the tasks—things like adaptive cruise control, where the longitudinal driving of the vehicle is supported, or things like lane keeping assist or lane departure warning. That's all supporting the driver, in the longitudinal and likewise in the lateral.
When you start to get into the autonomous region, that's where the system starts doing more of the actual processing and the working out of how to respond in a particular scenario. So what we've really got here is what's called conditional automation. This is the case where the system is driving the vehicle, but the driver is expected to take back control in certain situations.
As you advance that, we go into what we're calling things like Highway Pilot or City Pilot. In those scenarios, the vehicle is fully in control and you don't need a supporting driver, but only in particular, very specific scenarios—such as just on the highway, for example. That all leads up to level five, where full control is given to the car. There's no driving required, and there are no driver controls to take over.
Another example, which is quite a good one, is parking. We've obviously seen level zero, where the driver is fully in control. You've then had different levels of support being given to the driver—simple things like surround view—and then various automatic parking functions. Eventually, that's going to lead on to things like valet parking, where you potentially just turn up, hand over the car, and the car can go and park itself. So that gives you a very good example of a wide range of different SAE automation levels.
And this is all leading up to this level five, full automation. Now there are some key players working on this at the moment. You've probably heard a lot in the news about people like Google and Uber. They're doing a lot of development in that area, and they're the sort of big players that are trying to get it out onto the highways fairly quickly. What we're doing at Tata is quite different, and a bit more niche than that. So I'll try and tell you what we've done and how it's kind of unique.
It's also worth setting the scene: why is there so much attention on autonomy? That said, the technology is very interesting from my perspective and it's great to be working on it, but there really need to be some benefits, and some of the key ones are societal. Essentially, in terms of safety, a large percentage of crashes—90% of crashes—can be attributed in some way to the driver. So the driver is effectively one of the most uncertain systems in a vehicle.
I know we get different road conditions, and there's various wear and tear on cars. But effectively, in a new car, in good conditions, you can get a driver who can be anywhere from one extreme to the other. You can get older drivers, younger drivers, those that are impaired in some way, some whose capability is less, and some who just have different attitudes to driving. Some might be more docile; some might be more aggressive.
So the autonomous driver would effectively level the playing field a lot in that respect. And not just in terms of safety—in terms of how we actually use the roads, there could be great benefits. I'll talk in a moment about some of the trials we're doing in Milton Keynes. If you drive to Milton Keynes down the M1, as I do, there's often congestion, and there are often difficulties in parking.
If we had fleets of cars that were autonomous, some of these problems would be reduced, and depending on how you actually set up the cars, you can set them up to be more efficient in some way—be it in things like air quality. There are other factors that are changing at the moment as well. One key thing is that a few years ago, the balance of people living in the countryside versus the cities changed, so the need for people to have their own cars is changing. I live in the countryside; I can't get to very many places without a car, and public transport is not very good. But more and more people are living in the cities, where the requirement is different.
How we actually treat cars is changing a lot as well, so car ownership could change—it may not be a question of car ownership in the future. There's a lot of talk of mobility as a service, where basically, if you want to get somewhere, a car is one option or one part of the solution for getting you from A to B.
Okay. So I'd just like to quickly tell you about the UK Autodrive project. This is extremely relevant at the moment because we've reached the end of the project, and over the current two weeks we're doing lots of filming and lots of demonstrations to VIPs. The project has been running for a few years—three or four years now—and as with funded projects, it's a collaborative program. In terms of vehicles, we've got the RDM pods, which you may have seen on the news running around Milton Keynes, and you've got the autonomous vehicles. I'm going to talk about our autonomous vehicle; JLR also has autonomous vehicles in the project.
There's also been a connected element, where ourselves, JLR, and Ford have been working on V2X—a means of communicating vehicle to vehicle and vehicle to infrastructure. The other partners in the project are listed beneath, including support from different councils, legal entities, and the like. But I'm just going to concentrate more on the technical aspects.
Let me first go through our vehicle. So this is our vehicle—a Tata Hexa. As I said, Tata passenger cars largely serve just the Indian market, so they're generally quite different from the cars you might get in Europe. It's very much aimed at India, where price is a very key factor. That's the type of car that we're making and selling, so the numbers aren't as great as some of the international sellers because the market is quite restricted.
So this is one of the Tata vehicles; it's called a Hexa. In this particular variant, it's a six-seater with three rows of two seats, and as you can see, we've removed the rear row and put the processing in there. I'll talk about four elements of the car in terms of what we've changed. One of the key ones is the sensing: we've put a number of sensors on the vehicle, and I'll go through these in a bit more detail in a second.
And as with a lot of these autonomous systems, one of the next blocks to talk about is the perception. The sensors give us the raw data, but that raw data needs to be processed in some way to give us something useful—for example, to detect objects, maybe classify them, and plan the route. From that raw data, we need to do what we call perception and extract that information.
We've then got a block which is called planning. We've now got information on the road and the environment, and we need to plan our route from where we are to where we want to go. There are three stages to that, and once again, I'll go into a bit more detail about the different levels of planning. And once we've got a plan, we need to then control the vehicle, and that's the final block. So that's a fairly high-level view of the vehicle.
Now one of the interesting aspects of how we've done it is that we're quite a small team, but we've got some traditional automotive engineers—MATLAB and Simulink users who have come in and are very familiar with automotive control. That's the right-hand side of this diagram. But we've also got some computer scientists—people who've worked on robotics, people who came from the University of Birmingham—who are very much based in Linux, C++, and Python.
So we've got the two different areas, and the way it ended up splitting is that a lot of the sensing and perception has been done in those environments, and they've interfaced with the work being done in MATLAB and Simulink on the control and the planning. So I'll talk a bit about how we've technically merged that together, and maybe a few of the issues in doing that.
One way of joining these together—which I should mention is key to our system, and I'll go into a lot more detail in a minute—is using the Robot Operating System (ROS), which is supported by MATLAB. It's an easy means of communicating between the C++ and Python code on the left and the MATLAB side on the right. We're also using something called PTP, the Precision Time Protocol, which is our means of synchronizing time.
Once again, that's a standard—an IEEE standard—and that's supported on both sides as well, so that's another means of integrating these two very different ways of solving the problem. And we're using CAN; I mention that for completeness because that's how we finally interface with the vehicle. So just briefly, to mention some of the sensors—I haven't got the best pictures of the car and the sensors—but effectively, we've got a number of radars, short range and long range. We've got laser scanners, what they call 2D laser scanners; we've got six of those positioned around the car, and each gives about 130 degrees of field of view.
So if you merge those together, you get a full 360 degrees around the vehicle. We've then got 360-degree LIDARs as well—the full rotational scanning ones—and those are seen on the left side of the screen there. We've got a number of cameras for reference, and we've also got a lane detection and an object detection camera. We've also got GPS, and you can see one of the antennas there in the middle of the screen, a bit like a disc.
We've got two of those antennas, one at the front and one at the back, which gives us direction. And we've also got information coming from a base station to give us the differential aspect of GPS, for better accuracy. The little picture on the right—it's not very clear—is meant to show a point cloud representation. That's the raw output you might get from one of the laser scanners.
So that's the sensor suite we've got. As I said, we've taken out the rear two seats and put a rack in there, so we've got a number of processors. A lot of our processing, in particular on the sensing side, is undertaken on these industrial PCs—there are four of them you can see there—and that's all done on a Linux-based platform. You'll also see the blue box on the right: that's the Speedgoat box, and that's the box we use to interface to the vehicle to control our drive-by-wire system.
Also in there you've got things like the boxes that fuse together the LIDAR information, for example, the GPS processor, and the like. And then finally, we've got the drive-by-wire system. What we've done with this is that we haven't interfaced directly with the engine management or the brake system; we've just put an overlay on top, with actuators, to basically do what the driver does. I've zoomed in a bit here: we've got two motors that go on the steering column, we've got a piston that pushes down the brake pedal, and then we bypass the electronic throttle.
And we've also got, as you can see in the middle there, a box with a big emergency stop button. That gives full control back to the driver, and it also gives a few other bits of control—it's got the gear shifter on there, the handbrake, and the like. We can also control the ignition, the windshield wipers, and various auxiliaries, so as we build that functionality, we can incorporate all of that into our system.
This system is actually based on a mobility solution—we actually went to a third party for this. In some cars, for drivers who maybe can't turn the steering wheel, a joystick is fitted. We've used the same system, but rather than have a joystick, we in fact have a CAN interface so that we can operate the various controls. But we can always revert back to full manual driving mode, so that we can drive between sites, and also when we're testing, we can immediately revert back should the situation arise.
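As an illustration of what commanding such a CAN-based drive-by-wire overlay can look like from MATLAB, here is a minimal sketch using the Vehicle Network Toolbox. The CAN device, message ID, and byte layout are hypothetical, for illustration only, and are not the actual interface used on the Hexa.

% Hypothetical sketch: sending steering/brake demands to a drive-by-wire
% overlay over CAN (Vehicle Network Toolbox). Device, ID, and byte layout
% are made up for illustration.
ch = canChannel('Vector', 'CANcaseXL 1', 1);     % open a CAN channel
start(ch);                                       % go on-bus

msg = canMessage(512, false, 8);                 % standard 11-bit ID 0x200, 8 bytes
steerDemandDeg = 15;                             % steering wheel angle demand [deg]
brakeDemandPct = 0;                              % brake actuator demand [0..100 %]

% Pack the demands into the (hypothetical) layout: int16 angle in 0.1 deg,
% uint8 brake percentage, remaining bytes unused.
msg.Data = [typecast(int16(steerDemandDeg*10), 'uint8'), ...
            uint8(brakeDemandPct), zeros(1, 5, 'uint8')];

transmit(ch, msg);                               % send the command frame
stop(ch);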
This is quite a long slide; I'll try not to be too slow on it. I've already mentioned the high level: how we've got the perception, the planning, and then the control, all joined together using this ROS bus—this Robot Operating System idea. What I really wanted to show was where the different bits of processing come in, and how we've actually made up our system.
So for example, the first couple of things here are some of the sensors—the raw sensors. That could be the LIDARs, or the radars, or the GPS. SLAM is also put on there; that's effectively a bit of processing that's already done. SLAM stands for simultaneous localization and mapping. That's effectively where objects are detected and you can therefore start creating a map of where those objects are, but at the same time, you're also localizing yourself with respect to those objects. So it's a simultaneous process of creating a map and positioning yourself with respect to that map.
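For illustration only, the core simultaneous map-building and localization loop can be sketched with the LidarSLAM object from Robotics System Toolbox (around R2018a onwards). This is not the implementation used on the vehicle, which sits in the Linux/C++ perception stack; the resolution, range, and 'scans' input below are assumptions.

% Minimal LIDAR SLAM sketch (illustrative, not the vehicle code).
maxRange      = 25;    % usable laser range [m] (assumed)
mapResolution = 20;    % occupancy grid cells per metre (assumed)
slamAlg = robotics.LidarSLAM(mapResolution, maxRange);
slamAlg.LoopClosureThreshold    = 200;   % loop-closure acceptance threshold
slamAlg.LoopClosureSearchRadius = 8;     % search radius for loop closures [m]

for k = 1:numel(scans)              % 'scans' assumed: cell array of lidarScan objects
    addScan(slamAlg, scans{k});     % match the scan, extend the pose graph, update map
end

[scansUsed, poses] = scansAndPoses(slamAlg);                  % optimized vehicle poses
map = buildMap(scansUsed, poses, mapResolution, maxRange);    % occupancy grid map
show(map);                                                    % visualize the result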
So that's quite a nice technique, and it's being done on a lot of the LIDAR data. That then goes into our ROS bus, and we'll then take some of that information and put it into various processes. Here we're talking about things like sensor fusion. Sensor fusion has various benefits. In this case, we're using sensor fusion on the LIDARs—the six that go around the car—fusing them together to make them appear as if they were one sensor. But you can use sensor fusion to give you some element of redundancy, or to gain some sort of synergy: you might use a radar to give you good range information and a video image to give you good lateral information, and you could fuse those together to give you a much better picture of where an object is.
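As a toy illustration of that "synergy" style of fusion—radar for range, camera for bearing—the plain MATLAB sketch below combines the two with inverse-variance weighting. The noise levels and measurements are made up; the real fusion on the vehicle lives in the Linux/C++ stack and involves tracking, gating, and more.

% Toy radar/camera fusion sketch: range from the radar, bearing mostly
% from the camera, each sensor contributing what it measures best.
radarRange   = 42.3;               % [m]   radar: accurate range
radarBearing = deg2rad(4.1);       % [rad] radar: coarse bearing
camBearing   = deg2rad(2.8);       % [rad] camera: accurate bearing

% Inverse-variance weighting of the two bearing estimates (assumed noise).
sigmaRadarBrg = deg2rad(2.0);
sigmaCamBrg   = deg2rad(0.3);
w = (1/sigmaCamBrg^2) / (1/sigmaCamBrg^2 + 1/sigmaRadarBrg^2);
fusedBearing = w*camBearing + (1 - w)*radarBearing;

% Fused object position in the vehicle frame (x forward, y to the left).
objX = radarRange * cos(fusedBearing);
objY = radarRange * sin(fusedBearing);
fprintf('Fused object position: x = %.1f m, y = %.1f m\n', objX, objY);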
So we've got sensor fusion going on, and we've got point cloud processing—that's where we get all the dots from the LIDARs. And we get a lot of points: on the 360-degree LIDARs, we're talking about 300,000 points that need to be processed to get some useful information. But the processing goes on, and the information comes back onto the bus.
We've then got the various planning functions—I'll talk a bit more about the different planning functions in a moment—but essentially there are three stages to it: the global planner, the behavior planner, and then the trajectory planner. Those also utilize an offline-processed map; we create a map in advance for this application. Then there are some utility functions. We've got various screens that interface directly with the ROS bus, we have control GUIs, and there's a very nice bit of software called RViz—I think it's a free bit of software—where you can layer up information from different sensors, so you can visualize what's actually being detected as you go along.
And we've also got a watchdog. That's an extra safety feature in our vehicle, which we use to detect any faults in the system and revert control back to the driver as appropriate. One other thing we've then got is where we take the information from the ROS bus. Now, the Speedgoat box at the moment is a Windows-based box—or a DOS-based box, I should say—whereas ROS at the moment is Linux-based, so we can't actually use ROS directly on it. ROS 2 is going to be portable onto that kind of system, so hopefully we can use that, and ROS 2 will hopefully also have real-time capability.
But for the moment we can't do that, so we've got another interface there which we're calling the ROS bridge. It takes the ROS information—it reads it and sends it straight back out to our Speedgoat box, which is there. All of this that I'm talking about, things like ROS, is actually middleware, and it's all communicated over Ethernet. And it's the same with our ROS bridge: we just feed that Ethernet straight into our Speedgoat.
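Conceptually, that bridge amounts to subscribing on the ROS side and pushing the payload out over Ethernet towards the Speedgoat, where Simulink Real-Time UDP blocks can receive it. The sketch below uses the MATLAB ROS interface from Robotics System Toolbox and the Instrument Control Toolbox UDP object; the topic name, message type, addresses, and port are hypothetical.

% Hypothetical ROS-to-Speedgoat bridge sketch (names and addresses made up).
rosinit('192.168.1.5');                                % connect to the ROS master
trajSub = rossubscriber('/planner/trajectory', 'std_msgs/Float64MultiArray');

udpOut = udp('192.168.1.20', 25000);                   % UDP link towards the Speedgoat
fopen(udpOut);

while true                                             % run forever for the sketch
    msg = receive(trajSub, 1);                         % wait up to 1 s for a message
    payload = single(msg.Data);                        % trajectory samples as singles
    fwrite(udpOut, payload, 'single');                 % forward the bytes over Ethernet
end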
And the final control step is to take the commands that we want to send to the vehicle—and that goes to our drive-by-wire system, which reverts back to more traditional CAN. So that's our system as we've got it. I really wanted to highlight how we're using ROS to join everything together. One other thing to say is that it's a bit of a mixture in terms of where the software has been written.
Where the outlines of the boxes are blue, we've largely bought those functions in. Where they're green, we've done them ourselves, but in Linux with C++. And where they're purple, that's in MATLAB and Simulink.
So the interface, the motion controller, and the Speedgoat side we've done in MATLAB and Simulink, obviously. We've also done a lot of the trajectory planning in MATLAB and Simulink, but that's then embedded in the Linux environment. So we've kind of got the extremes: pure MATLAB and Simulink embedded on a real-time processor, a mixture of MATLAB and Simulink within Linux, and pure Linux processes as well.
So I said I'd just quickly go through the levels of the planner, to give you a bit of background on how we're doing it—but it's a fairly standard approach, I believe. The planning is in three stages. We've got a global planner, and I've just drawn a very simple map there: a map of Coventry, from the station to the university. That's no different from saying, I want to get from A to B, what's the best route? So that's a fairly high-level plan.
From that high-level plan, we then do what we're calling behavior planning. That takes the plan, and we've then got localized maps of the route, which we've split up into what we're calling lanelets. Each lanelet has some behavior associated with it as well. The standard behavior is keep in lane, but we might then have something like take a left turn or stop at a traffic light. Those are the behaviors that we then need to plan to meet.
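To make the lanelet idea concrete, a route could be represented as simply as the struct-array sketch below. The field names, values, and behaviors are purely illustrative; the actual map format used in the project isn't described in the talk.

% Illustrative lanelet representation (field names and values are hypothetical).
lanelet(1).id         = 101;
lanelet(1).centerline = [0 0; 20 0; 40 0.5];    % x-y waypoints [m]
lanelet(1).speedLimit = 13.4;                   % [m/s], roughly 30 mph
lanelet(1).behavior   = "keep_lane";

lanelet(2).id         = 102;
lanelet(2).centerline = [40 0.5; 55 4; 60 12];
lanelet(2).speedLimit = 8.9;
lanelet(2).behavior   = "turn_left";

lanelet(3).id         = 103;
lanelet(3).centerline = [60 12; 60 30];
lanelet(3).speedLimit = 13.4;
lanelet(3).behavior   = "stop_at_traffic_light";

% The behavior planner walks the global route lanelet by lanelet and hands
% the active behavior plus local geometry to the trajectory planner.
route = [101 102 103];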
Those ones are built into the map and are what we're calling static, because they don't change—that's part of the strategic planning anyway. There's also dynamic planning that needs to go on for obvious things like obstacles becoming apparent. The next part of the planning is then to work out the trajectory. We actually compute a number of candidate profiles, and we then select the most appropriate one.
Initially, we plan a path—very much a spatial path in terms of position. That then becomes what we're calling a trajectory, and the difference is that we've now added some temporal information: we not only want to be at position x-y, we want to be at position x-y at a given time with a given speed.
From all those candidate trajectories, we then put them through various criteria, various tests, to select the most appropriate one. That will be to do with things like being the most accurate in position, avoiding obstacles, giving the best comfort for the driver, and the like. There are a number of criteria; I've listed a few of them there. I'll then talk a bit more about the controls.
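The selection step can be sketched as a weighted cost over each candidate, penalizing for example deviation from the reference path, proximity to obstacles, and lateral acceleration as a comfort proxy. The weights and the helper functions offsetFromReference and curvatureOfPath below are placeholders, not the criteria actually used in the vehicle.

% Illustrative candidate selection: pick the trajectory with the lowest cost.
% Each candidate is assumed to be a struct with fields x, y, v (positions, speed).
wTrack = 1.0;  wObst = 5.0;  wComfort = 0.5;     % assumed weights

bestCost = inf;  bestIdx = 0;
for k = 1:numel(candidates)
    c = candidates(k);

    % Tracking cost: mean squared offset from the reference path
    % (offsetFromReference is a placeholder helper, not defined here).
    trackCost = mean(offsetFromReference(c.x, c.y, refPath).^2);

    % Obstacle cost: penalize clearances below 2 m (obstacles = M-by-2 x-y points).
    d = sqrt(min((c.x(:) - obstacles(:,1).').^2 + ...
                 (c.y(:) - obstacles(:,2).').^2, [], 2));
    obstCost = sum(max(0, 2.0 - d));

    % Comfort cost: mean squared lateral acceleration, v^2 * curvature
    % (curvatureOfPath is another placeholder helper).
    kappa = curvatureOfPath(c.x, c.y);
    comfortCost = mean((c.v(:).^2 .* kappa(:)).^2);

    cost = wTrack*trackCost + wObst*obstCost + wComfort*comfortCost;
    if cost < bestCost
        bestCost = cost;  bestIdx = k;
    end
end
chosen = candidates(bestIdx);    % the trajectory handed on to the motion controller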
Those planning algorithms are done in MATLAB and Simulink but embedded in Linux, whereas these control algorithms are embedded in the Speedgoat. I've just put three examples here. The precursor to this project was where we had one of the small Indian delivery vehicles, called a Tata Ace. We electrified one of those with lead-acid batteries and did some very simple autonomous testing on it in the previous project. That used something called pure pursuit. Pure pursuit basically says: I know I want to go around that bend, so you choose a point, you steer towards that point, and you keep updating it. That's fairly simple, and it does have some limitations—you've got to be careful in terms of the tradeoff between accuracy and stability—but it's a very simple method, and for low-speed maneuvering it's absolutely fine.
You can compare it a bit to being towed on a long tow bar, and if any of you have had the experience of a trailer becoming unstable, you'll know it's very similar in this situation—it isn't always stable. So you've got to choose your look-ahead point to effectively match your speed: the faster you're going, the further ahead you need to look for stability. But that means you end up cutting corners, so it's got its limitations.
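For reference, the pure pursuit idea boils down to a few lines: find the path point one look-ahead distance away, compute the curvature of the arc that reaches it, and convert that to a steering angle with a bicycle model. This is a generic textbook sketch, not the code that ran on the Tata Ace.

function steerAngle = purePursuitSteer(pose, path, lookAhead, wheelbase)
% Generic pure pursuit steering sketch (not the project code).
%   pose      = [x y heading] of the vehicle in the world frame
%   path      = N-by-2 list of path points [x y]
%   lookAhead = look-ahead distance [m]; larger is more stable but cuts corners
%   wheelbase = vehicle wheelbase [m]

    % Transform the path points into the vehicle frame.
    dx = path(:,1) - pose(1);
    dy = path(:,2) - pose(2);
    xv =  cos(pose(3)).*dx + sin(pose(3)).*dy;   % longitudinal
    yv = -sin(pose(3)).*dx + cos(pose(3)).*dy;   % lateral

    % Pick the first point ahead of the vehicle at (or beyond) the look-ahead.
    d = hypot(xv, yv);
    idx = find(d >= lookAhead & xv > 0, 1, 'first');
    if isempty(idx), idx = size(path, 1); end    % fall back to the last point

    % Curvature of the arc through the goal point, then bicycle-model steering.
    kappa = 2*yv(idx) / d(idx)^2;
    steerAngle = atan(wheelbase * kappa);        % [rad]
end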
Another method we've looked at is lane keeping. This is where you basically look at the trajectory ahead and fit a curve to it. If you fit a simple quadratic, in this case, then quite elegantly the coefficients of that quadratic give you your position in the lane, your heading in the lane, and a function of the curvature. Those can then be used directly in simple control loops—you might have a simple proportional gain on each of those loops.
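A minimal sketch of that quadratic-fit idea: express the look-ahead points in the vehicle frame, fit a quadratic, and the coefficients fall out as lateral offset, heading error, and (twice the leading coefficient as) curvature, each feeding a proportional term. The gains and wheelbase below are placeholders, not the project values.

% Lane keeping via quadratic fit (illustrative gains, not project values).
% xv, yv are the look-ahead trajectory points in the vehicle frame
% (x forward along the vehicle, y lateral).
c = polyfit(xv, yv, 2);          % y(x) ~ c(1)*x^2 + c(2)*x + c(3)

latOffset  = c(3);               % lateral position in the lane at the vehicle [m]
headingErr = atan(c(2));         % heading relative to the lane [rad]
curvature  = 2*c(1);             % approximate path curvature at x = 0 [1/m]

% Simple proportional contributions from each term (placeholder gains).
Koff = 0.15;  Khead = 0.8;  Kcurv = 1.0;  wheelbase = 2.85;
steerCmd = Koff*latOffset + Khead*headingErr + Kcurv*atan(wheelbase*curvature);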
That's great for lane keeping applications on motorways, so it's particularly good when you're going faster and you've got very shallow curves and the like. Trajectory tracking is the method that we're actually using, and that's really based on the model predictive control method. We use the Model Predictive Control Toolbox from MATLAB for that. What I've shown here is that the model is very key to it—the model we embed is key to the performance we actually get. These are just to show the very high-level model.
Effectively, we use the model in the middle—that's the actual real-time implementation—but the same model is then used in offline simulations and also on playback data. So it's very nice to just be using the same model on different sets of data for analysis and testing.
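As a rough illustration of how an embedded vehicle model drives the controller in the Model Predictive Control Toolbox, the sketch below wraps a very simple linearized lateral model (constant speed, small angles) in an MPC object. The model, horizons, weights, and constraints are illustrative assumptions, not the ones used in the Hexa.

% Illustrative MPC setup for lateral trajectory tracking (assumed values).
v  = 10;      % forward speed the model is linearized at [m/s]
Lw = 2.85;    % wheelbase [m]
Ts = 0.05;    % controller sample time [s]

% States: [lateral offset; heading error], input: steering angle (small angles).
A = [0 v; 0 0];
B = [0; v/Lw];
C = eye(2);
plant = ss(A, B, C, 0);

p = 20;  m = 3;                                       % prediction and control horizons
mpcobj = mpc(plant, Ts, p, m);
mpcobj.MV.Min = -0.5;                                 % steering limits [rad]
mpcobj.MV.Max =  0.5;
mpcobj.Weights.OutputVariables          = [1 0.2];    % weight offset over heading
mpcobj.Weights.ManipulatedVariablesRate = 0.1;        % penalize steering rate (comfort)

xmpc = mpcstate(mpcobj);                              % internal controller/observer state
y    = [0.3; 0.05];                                   % measured offset and heading error
ref  = [0; 0];                                        % track the planned trajectory
u    = mpcmove(mpcobj, xmpc, y, ref);                 % steering command this step [rad]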
If anybody wants to talk to me afterwards, I've got quite a few interesting comments about model predictive controller design. I think Mark's talking this afternoon, and he'll also look at those. I find it an interesting technique to use, and it's quite easy to use in MATLAB and Simulink, but there are a few points—not a lot to do with the implementation, more to do with the method—which are quite interesting.
Another key issue: I mentioned using PTP, the Precision Time Protocol. One of the key issues we've found is that, because you've got a number of processes working, we get a lot of data, and it's all aged in some form. We don't have a situation where everything is published at the same time, so without a common time base we'd have no idea how old the data is. Effectively, on the ROS bus you get a time stamp associated with the data, and the graphs I've shown at the bottom show different messages in the system: you get lots of different delays. They vary—they're fairly cyclical, but they do vary. So to correct for time delays, which is quite a problem, we've had to use PTP and then feed that into the algorithms.
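With the clocks synchronized, the simplest use of those time stamps is to age-compensate each measurement before it reaches the planner or controller, for instance by propagating a detected object forward by its measured latency. The sketch below is a minimal constant-velocity illustration of that idea, not the compensation scheme actually implemented.

% Minimal latency compensation sketch (constant-velocity assumption).
% With PTP, the sensor time stamp and the controller clock share one time base,
% so the age of a measurement is just the difference between the two.
tNow   = 1234.567;          % current controller time [s] (PTP-synchronized)
tStamp = 1234.482;          % time stamp carried with the ROS message [s]
age    = tNow - tStamp;     % measurement latency, here 85 ms

objPos = [42.3; -1.6];      % object position at the stamped time [m]
objVel = [-8.0;  0.1];      % object velocity estimate [m/s]

objPosNow = objPos + objVel*age;    % propagate the object to the current time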
Data logging and visualization are quite key as well. We're doing all these test runs and all these simulations, but actually being able to visualize how well these algorithms work has meant that we've had to create some new means of visualizing it. There are fairly obvious things: on the left, we take our GPS tracks and overlay them on a map or a satellite image. But on the right, we've found that if you overlay—in this case—the speed along your trajectory, you can actually see how well your planned speed trajectory compares to your actual one.
So in this case, the black is the actual and the green is the desired, and you can see they match up quite well. It's only representative—it doesn't give you an exact comparison, because at any point in time the trajectory might change as an obstacle comes along and you might then want to do something different—but it gives a good flavor of how well you're doing. We've designed these tools largely for offline use.
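That planned-versus-actual speed overlay can be reproduced offline with a few lines of plotting, along the lines of the sketch below. The variable names are placeholders for whatever the data logging produces.

% Offline sketch of the planned-vs-actual speed comparison (placeholder names).
% sLog, vActual   : logged distance along the route [m] and measured speed [m/s]
% sPlan, vPlanned : planned distance [m] and speed profile [m/s]
figure; hold on; grid on;
plot(sLog,  vActual,  'k-',  'LineWidth', 1.5);   % black = actual (as on the slide)
plot(sPlan, vPlanned, 'g--', 'LineWidth', 1.5);   % green = desired/planned
xlabel('Distance along route [m]');
ylabel('Speed [m/s]');
legend('Actual', 'Planned', 'Location', 'best');
title('Planned vs. actual speed along the trajectory');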
But it was interesting listening to Mark earlier: if these could become online tools, they'd be much more helpful to us. And here are just some more visualization tools—some of these at the bottom are from RViz, point cloud processing, and so on. These are all things where we've got different elements within our system, and some are on the Speedgoat. If they could be combined for both online and offline use, that would be particularly useful.
I've tried to select some interesting bits from the various testing we've been doing. This was done in Coventry at the end of last year, and some of it in Milton Keynes. At the moment, we're actually doing the filming—today I'm here, but the BBC are filming us, so we might get some of this on the news in the next few days.
You'll spot the obvious mirroring here—a clip of turning left has been clipped in as turning right. My apologies for that; my job wasn't as film editor. I just wanted to show some clips of it actually functioning. That was the function development, but in the end, I think it really highlights that we've just taken a pragmatic approach to developing some autonomous vehicle functionality. We're a very small team doing this, so we've utilized off-the-shelf tools, and I've listed a number of those there.
We've used off-the-shelf hardware, and also, where appropriate, bespoke third-party software. Some of the key algorithms, though, we've done ourselves, and that's what's glued it all together, with things like ROS, PTP, the planning algorithms, and the control algorithms.
I'll just leave you with a quick thank-you to a very small team—a few of the names are mentioned there. I've also had a bit of help from GianCarlo, particularly with this presentation, so I want to thank him for that. And I know I've kept you from your lunch, but if you'd like to stay behind and ask any questions, I'd be totally happy to take them.