How Do Autonomous Cars Work?

You’ve no doubt heard about autonomous cars and how they’re the future of driving. People have all kinds of interesting ideas about what self-driving means, but the reality is quite different.

Unlike the depictions in movies and television shows, the real-world version isn’t nearly as smooth or dramatic. The technology is also still being developed, so what self-driving cars can do will shift quite a bit along the way.

The way that autonomous cars work is somewhat complex, but it isn’t impossible to understand. Just remember that there are many systems working together, and improvements to these systems are constantly being introduced.

Several Levels

Most people think that all autonomous cars are created equal. They fail to realize that autonomy comes in several different levels. Technically speaking, you might already own a car with some self-driving capabilities, even though you wouldn’t consider your ride to be autonomous.

To really understand how autonomous cars work, you need to have a working knowledge of the different levels of autonomous driving.

Not all autonomy is the same

Level 0

You definitely already know Level 0 since this involves absolutely no autonomous drive capabilities in the car. As the human driver, you’re completely on your own for spotting other vehicles, pedestrians, animals, inanimate objects, and anything else in any direction.

Level 1

Level 1 is the first level with autonomous drive assistance, but the driver still maintains at least some control of the vehicle at all times. The autonomous system can take over maintaining the car’s speed or controlling steering, but not both at the same time. This can also only happen under very specific circumstances, such as on a highway with a physical divider between oncoming lanes of traffic, plus clear lane markings.

With Level 1, the driver absolutely has to pay attention to what the car is doing at all times. If the autonomous systems fail, the human driver must take over those duties immediately. A good example of this capability is adaptive cruise control, which adjusts the car’s speed based on the distance and speed of the car ahead in the same lane, maintaining a safe following distance.
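To make the idea concrete, here’s a minimal sketch of the time-gap logic behind adaptive cruise control. The function name, gain, and time gap are illustrative assumptions for this article, not any manufacturer’s actual control law.

```python
# Illustrative time-gap logic for adaptive cruise control. The gain,
# time gap, and units (m, m/s) are assumptions for demonstration.

def acc_target_speed(own_speed, lead_distance, lead_speed,
                     set_speed, time_gap=2.0, gain=0.5):
    """Return the speed (m/s) to aim for on this control cycle."""
    if lead_distance is None:           # no car detected ahead
        return set_speed                # cruise at the driver's set speed
    desired_gap = own_speed * time_gap  # e.g. two seconds of travel
    gap_error = lead_distance - desired_gap
    # Track the lead car's speed, nudged by the gap error, but never
    # exceed the speed the driver set.
    return min(set_speed, lead_speed + gain * gap_error)

# Travelling 30 m/s with a lead car 45 m ahead doing 28 m/s, set to 33 m/s:
print(acc_target_speed(30.0, 45.0, 28.0, 33.0))  # 20.5 -> ease off to rebuild the gap
```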

Level 2

When a car uses Level 2 autonomous drive features, it can steer, brake, and control its speed in certain situations. The driver still needs to pay close attention to everything the car is doing and be ready to take over in an instant if the system needs help or fails. In addition, the driver must still handle more advanced maneuvers like changing lanes or deciding when to stop at a traffic light.

Most Level 2 autonomous drive systems require the driver to periodically place both hands on the steering wheel, proving that they’re still paying attention. Also, this kind of system can only operate correctly in certain road conditions, usually a divided highway with clear lane markings. There are many examples of this kind of technology, like Tesla’s Autopilot and Cadillac’s Super Cruise.

Level 3

Cars with Level 3 autonomous drive capabilities can take over pretty much everything involved with driving, including monitoring the area around the car for dangers. Once the car does encounter a situation it can’t handle, it will alert the driver to take over.

There currently aren’t many Level 3 systems available to the public, although that should be changing soon enough. One current example is Audi’s Traffic Jam Pilot.

Level 4

Often referred to as high automation, Level 4 is closer to what most people think of when the term “self-driving car” is used. The car is able to drive without human input or supervision, but only in select conditions. That might mean the technology only works in certain areas that are well-mapped or have well-maintained roads, or only when weather conditions are ideal.

If the car encounters a situation it cannot negotiate, it has the ability to safely exit the road and wait for the human driver to take over or for help to arrive. Because of this capability, humans could actually sleep while riding in the car and not pay attention to the road at all.

No commercially available cars have offered Level 4 autonomous drive capabilities. The Google Firefly pod car used this technology, but it could only travel in a restricted area and couldn’t exceed a certain speed.

Level 5

You will often hear Level 5 referred to as full automation because these cars can drive just as well as humans, if not better. A Level 5 car can navigate any road in any set of reasonable conditions without problems. The people riding inside only need to specify a destination, and that’s it.

Technically, there are no Level 5 autonomous drive systems in existence today. Some experts argue this technology isn’t attainable, while others contend it is, though it might be quite some time before we develop it.
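As a recap of the taxonomy above, here’s one illustrative way to capture the levels in code. The comments and the simple supervision rule are simplifications of the descriptions in this article, not an official definition.

```python
# Illustrative summary of the driving levels described above.
from enum import IntEnum

class AutonomyLevel(IntEnum):
    L0 = 0  # no automation: the human does everything
    L1 = 1  # speed OR steering assistance, never both at once
    L2 = 2  # speed AND steering in limited conditions, driver supervises
    L3 = 3  # the car drives and monitors, but may hand control back
    L4 = 4  # full self-driving, only within a restricted domain
    L5 = 5  # full self-driving anywhere a human could drive

def driver_must_supervise(level: AutonomyLevel) -> bool:
    # Up to Level 2 the human must watch the road at all times; from
    # Level 3 the car monitors, though it may still request a takeover.
    return level <= AutonomyLevel.L2

print(driver_must_supervise(AutonomyLevel.L2))  # True
print(driver_must_supervise(AutonomyLevel.L4))  # False
```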

Many Sensors

You experience your surroundings through the five senses: sight, smell, touch, hearing, and taste. An autonomous car doesn’t have the same capabilities as you, but it must replicate some of them if it’s to successfully navigate city streets.

You can throw out touch, taste, and smell for driving, because they honestly don’t do much for your driving skills. But the other two, sight and hearing, do play a role. While the ability to hear isn’t yet something autonomous cars can replicate, sight is. Several different sensors give the car the ability to spot obstacles as well as lane markings and road signs. Each one has its own set of advantages and disadvantages, which is why many autonomous drive systems use more than one sensor type.

It’s impossible to overstate the importance of sensors in autonomous cars. To unlock full self-driving capabilities, cars will need sensors that exceed humans’ ability to perceive their environment. Otherwise, it will make more sense to have a human driver behind the wheel.

Ultrasonic

An ultrasonic sensor imitates how bats navigate caves and other areas in complete darkness. The sensor sends out sound waves, then listens for the echoes generated when those waves hit objects. Based on how long an echo takes to return, the sensor can determine how far away an obstacle is from the car, and possibly the obstacle’s size.
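The arithmetic behind this is simple: the sound makes a round trip, so the distance is half the travel time multiplied by the speed of sound. A minimal sketch, assuming dry air at roughly room temperature:

```python
# Distance from an ultrasonic echo: the sound travels to the obstacle
# and back, so halve the round trip. 343 m/s is an assumption for dry
# air at about 20 C; the true value varies with temperature.

SPEED_OF_SOUND = 343.0  # m/s

def echo_distance(round_trip_seconds: float) -> float:
    return SPEED_OF_SOUND * round_trip_seconds / 2.0

# A 0.006 s echo puts the obstacle roughly a metre away:
print(f"{echo_distance(0.006):.2f} m")  # 1.03 m
```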

The catch with ultrasonic sensors is that they work best for the area immediately around the car. That means these sensors are ideal for low-speed driving, like when angling into a parking spot, and not for driving at speed on roads.

While limited, ultrasonic sensors perform their job quite well. They already function in assistive parking systems found on some cars today, particularly luxury models.

Image

You know them as cameras, and they’re in familiar devices like your smartphone. Autonomous cars can be outfitted with multiple cameras of varying picture quality, pointed in all kinds of directions. Since the car can’t turn its head, this ideally provides vision all the way around the car’s perimeter.

Stereo cameras are a special type that achieves 3D vision, much as our paired eyes allow for depth perception. This is a critical component: without it, the car must use machine learning to translate 2D images into 3D space, which leaves room for error, possibly leading to a critical accident on the road.
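For the curious, here’s the classic pinhole-camera relation a stereo pair uses to recover depth from the shift (the “disparity”) of the same point between the left and right images. The camera parameters below are made-up illustrative values, not from any real rig:

```python
# Depth from a stereo pair: a nearer object shifts more between the
# two images. Classic pinhole relation: depth = f * B / d.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        return float("inf")  # no shift means the point is effectively at infinity
    return focal_px * baseline_m / disparity_px

# 700 px focal length, cameras 12 cm apart, 8 px disparity:
print(f"{stereo_depth(700.0, 0.12, 8.0):.1f} m")  # 10.5 m
```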

We’re already familiar with how cameras work

Cameras have some other capabilities which make them essential for any autonomous car. For one, cameras can read traffic signs and tell which signal is illuminated on a traffic light. Some cars today use cameras to remind the driver which road signs have been passed, in case they missed what the signs said. Cameras also determine the exact location of lane lines on the road, a critical component for lane-centering systems.
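As a rough illustration of how those lane-line locations feed a lane-centering system, here’s a toy proportional correction. The sign conventions, gain, and clamp are assumptions for demonstration only:

```python
# Toy lane-centering correction: steer toward the midpoint of the two
# detected lane lines. Offsets are metres to the right of the car's
# centreline (negative = left); gain and clamp are illustrative.

def lane_center_steer(left_line_m: float, right_line_m: float,
                      gain: float = 0.3, max_steer_rad: float = 0.1) -> float:
    """Return a steering command in radians (positive = steer right)."""
    lane_center = (left_line_m + right_line_m) / 2.0  # where the lane's middle sits
    command = gain * lane_center                      # steer toward it
    return max(-max_steer_rad, min(max_steer_rad, command))

# Left line 1.6 m to the left, right line only 1.2 m to the right: the
# car has drifted right, so steer slightly left (negative command).
print(f"{lane_center_steer(-1.6, 1.2):.3f}")  # -0.060
```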

If other sensor systems fail, cameras can act as a backup. While some camera lenses might fog up or become covered with rain or snow, new lens technology actually prevents these types of water-based obstructions. However, mud and other grime can still obstruct the lens, making washers necessary to clear the surface. There’s also the problem that image sensors, just like the human eye, can’t detect objects through thick fog, snowfall, or heavy rain. This means that in especially poor weather, these sensors become essentially useless.

Another big limitation of cameras is range. To be truly effective, they need a reach of at least 250 metres; otherwise, the car can’t properly anticipate obstacles down the road.

Finally, while cameras might provide a clear video feed of what’s going on around the car, that doesn’t mean the autonomous drive unit knows what to do with all that information. Recognition algorithms still sometimes struggle to identify objects such as pedestrians and animals, a problem that needs to be corrected.

Radar

You probably think of ships and planes when it comes to the use of radar, but it’s become a way to help cars navigate roads. Police have used radar sensors for some time to tell how fast another car is traveling, so the technology has been developed fairly well over the years.

Radar sensors send out electromagnetic waves toward a designated area around the car. When those waves hit an obstacle, they reflect back, revealing how quickly the object is traveling and how far away it is from the car. There are both short- and long-range radar sensors, which provide coverage at varying distances.
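A sketch of both measurements: range from the wave’s round-trip time, and closing speed from the Doppler shift of the reflection. The 77 GHz carrier is an assumption based on the common automotive radar band, and the numbers are illustrative:

```python
# Range from time-of-flight, closing speed from the Doppler shift.

C = 3.0e8          # speed of light, m/s
F_CARRIER = 77e9   # Hz, typical automotive radar band (assumption)

def radar_range(round_trip_seconds: float) -> float:
    return C * round_trip_seconds / 2.0

def radar_closing_speed(doppler_shift_hz: float) -> float:
    # Reflection doubles the Doppler shift: delta_f = 2 * v * f0 / c
    return doppler_shift_hz * C / (2.0 * F_CARRIER)

print(f"{radar_range(4.0e-7):.0f} m")          # 60 m away
print(f"{radar_closing_speed(5133):.1f} m/s")  # ~10 m/s closing
```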

Several sensors can be mounted on the car, providing coverage in every possible direction and at varying depths. Since radar sensors are relatively inexpensive, the car can even have more than what’s needed, creating redundancy in case one or more sensors fail.

The big shortcoming of radar is the fact that 2D radars only scan horizontally and not vertically. That means they can determine the footprint of an object, but not its height, limiting their usefulness. There are 3D radars being developed which would solve this issue.

Lidar

Short for Light Detection and Ranging, Lidar is an advanced sensor type you probably have zero experience with. The sensor uses an invisible and completely harmless laser to scan the area around the car. From that scan, Lidar measures the ranges of objects all around, providing a 3D image of the entire environment the car is traveling through.
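Each return from that scan is just a range plus the laser’s pointing angles; converting those to x, y, z coordinates is what builds the 3D picture. A minimal sketch with illustrative values:

```python
# Convert one lidar return (range, azimuth, elevation) to a 3D point.
import math

def lidar_point(range_m: float, azimuth_rad: float, elevation_rad: float):
    horizontal = range_m * math.cos(elevation_rad)  # projection onto the ground plane
    x = horizontal * math.cos(azimuth_rad)          # forward
    y = horizontal * math.sin(azimuth_rad)          # left/right
    z = range_m * math.sin(elevation_rad)           # up/down
    return (x, y, z)

# A return 25 m out, 10 degrees to the side, 2 degrees upward:
x, y, z = lidar_point(25.0, math.radians(10), math.radians(2))
print(f"{x:.1f}, {y:.1f}, {z:.1f}")  # 24.6, 4.3, 0.9
```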

When this information is combined with images from cameras, Lidar is excellent at helping cars properly identify all kinds of obstacles as they emerge into view. That allows the car to react appropriately on ever-changing streets.

The big catch with Lidar is cost. Thanks to the use of rare earth minerals in Lidar sensors, they’re quite expensive to manufacture. Researchers are trying to find a lower-cost way to create Lidar, while also working to improve their imaging capabilities by flashing lasers instead of using a constant beam.

V2X

Data sharing could be a huge part of effective autonomous cars. Currently in use in limited locations, this technology allows the car to communicate with other cars, with infrastructure like traffic lights or bridges, and even with the cloud, enhancing its understanding of the surrounding environment.

Through V2X communication, your autonomous car would “see” another car approaching from the opposite direction, even though a hill is completely blocking its view, because the two are communicating their positions to each other. Your car would also know when there’s an accident in the area, road construction, or congestion, so it can choose an alternate route.
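A toy version of that hidden-car scenario might look like the following. The message fields here are a simplification made up for this sketch; real V2X standards, such as SAE J2735’s Basic Safety Message, carry far more detail:

```python
# Each vehicle periodically broadcasts its position, speed, and heading;
# receivers flag nearby broadcasters on a roughly opposing heading even
# when a hill hides them from every onboard sensor. Fields and the
# 300 m / 150-degree thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class V2XMessage:
    vehicle_id: str
    lat: float          # degrees
    lon: float          # degrees
    speed_mps: float
    heading_deg: float  # 0 = north, increasing clockwise

def oncoming_hazard(own_heading_deg: float, msg: V2XMessage,
                    distance_m: float, warn_within_m: float = 300.0) -> bool:
    relative = abs((msg.heading_deg - own_heading_deg + 180) % 360 - 180)
    return distance_m < warn_within_m and relative > 150  # roughly opposite heading

msg = V2XMessage("car-42", 43.7315, -79.7624, speed_mps=22.0, heading_deg=185.0)
print(oncoming_hazard(own_heading_deg=0.0, msg=msg, distance_m=180.0))  # True
```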

Cars could start communicating with city infrastructure

With the cloud, your car would have access to a huge amount of data being collected by sensors on other vehicles and parts of the city’s infrastructure. This means your car could be monitoring routes from miles away, effectively expanding the imaging capabilities of the autonomous tech by quite a bit.

For this tech to really be effective, a large group of cars needs to be connected to the cloud and feeding it information. That means early adopters of autonomous cars won’t enjoy this benefit until more people start participating. In areas where infrastructure isn’t outfitted with sensors like cameras or radar, information will be more limited. Also, V2X will be less useful on rural roads than in a crowded city.

AI

As you’ve probably guessed, onboard artificial intelligence must process all the information gathered by sensors and V2X communications. It makes sense of everything, not only interpreting what’s going on around the car but also deciding the best course of action to take in any situation. To reach Level 5, the AI would need human-like reasoning capabilities.
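Stripped to a skeleton, that sense-fuse-decide cycle might look like the sketch below. Every function here is a toy stand-in for an enormous amount of real software; only the way the pieces connect is the point:

```python
# Toy sense-fuse-decide cycle. All logic is illustrative.

def fuse(sensor_readings, v2x_messages):
    # Real systems merge camera, radar, Lidar, and V2X data into one
    # model of the scene; here we just take the nearest reported obstacle.
    return {"obstacle_ahead_m": min(sensor_readings), "alerts": v2x_messages}

def decide(world):
    # Real planners weigh routes, traffic rules, and predictions; this
    # toy version simply brakes when something is close.
    return "brake" if world["obstacle_ahead_m"] < 30.0 else "cruise"

def drive_cycle(sensor_readings, v2x_messages):
    world = fuse(sensor_readings, v2x_messages)  # perception
    return decide(world)                         # planning -> action

# One cycle: radar reports 22 m, Lidar 24 m, camera 25 m to the nearest obstacle.
print(drive_cycle([22.0, 24.0, 25.0], []))  # brake
```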