Driverless technology will revolutionize the automotive industry

2019-09-17 10:25:45 贺文峰


The day of driverless vehicles will come. When it does, road safety can improve dramatically and the automotive industry will undergo disruptive change, perhaps as early as 2022.

The concept of artificial intelligence has been around for a long time, but earlier results were unimpressive because machines had to be explicitly taught rules before they could make judgments for people. With the rise of deep learning in recent years, artificial intelligence has entered a period of rapid development. Deep learning algorithms have become the cornerstone of driverless technology: using deep learning for perception and decision-making will quickly push driverless capability above that of human drivers.

In driverless R&D, measured by how deeply deep learning is applied, there are two camps. The first is the traditional car manufacturers, which began researching autonomous driving more than a decade ago; their main goal is to use active safety features to improve the safety and handling of their cars. The second is the internet companies, represented by Baidu and Google, which use deep learning to pursue fully driverless driving.

In applying deep learning to driverless vehicles there are also two approaches. One is the camera-based solution exemplified by Mobileye, the world's largest supplier of advanced driver-assistance systems, which Intel acquired not long ago for 15.3 billion dollars. Mobileye has done excellent deep-learning work on the sensor side but has not applied deep learning to the decision system, so its technology can only support semi-automated driving. Another company worth watching is the autonomous-driving developer Drive.ai, which is more aggressive: it turns raw sensor output directly into driving decisions, in effect giving the vehicle a brain that understands its environment and drives safely. Such schemes are already being verified; what remains is a large volume of road testing to prove the technology's reliability.

The reason driverless driving may be realized within three to five years rests mainly on breakthroughs in the following technologies.

First, deep learning algorithms have become the cornerstone of driverless technology. Using deep learning for perception and decision-making will quickly raise driverless capability toward that of human drivers. AlphaGo's success last year, for example, helped educate many companies outside the high-tech industry, from automakers' leadership to chip manufacturers. Thread, for instance, has supplied the data and messages from most automotive sensors to the China Automotive Research Center for studying the vehicle control strategy of hydrogen-energy vehicles.

Second, the replacement of sensors and the upgrading of sensor-fusion technology. Sensor technology has made breakthroughs in the past two years, and the combination of deep learning with the new sensors has given driverless driving a very strong push.
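To make the idea of sensor fusion concrete, here is a minimal inverse-variance weighted fusion of a radar and a camera range estimate. This is an illustrative sketch only; the noise values are assumptions and do not come from any system described in this article.

```python
def fuse(measurements):
    """Inverse-variance weighted fusion of independent range estimates.

    measurements: list of (value_m, variance_m2) tuples from different sensors.
    Returns (fused_value_m, fused_variance_m2).
    """
    weights = [1.0 / var for _, var in measurements]
    fused = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var


# Example: radar reports 25.0 m with low noise, camera reports 26.5 m with higher noise.
radar = (25.0, 0.25)   # (range in metres, assumed variance)
camera = (26.5, 1.0)
print(fuse([radar, camera]))  # fused estimate lands closer to the more confident radar reading
```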

Third, hardware upgrades make it possible to build a cloud-connected "car brain". Once the car brain has sufficient computing power, the learning algorithms and models can be placed in the vehicle itself, which can then make real-time judgments and decisions while driving. The key is basic decision-making: the vehicle identifies objects through camera and lidar perception, compares those judgments against data from Thread, and the combined algorithm is checked against a large volume of data to verify the accuracy of its decisions.
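The on-board "car brain" loop described above can be sketched schematically as follows. The perception and decision functions here are placeholders with an assumed braking heuristic, not the system discussed in the text.

```python
import time

def perceive(camera_frame, lidar_scan):
    """Placeholder perception step: return detected objects with distances in metres."""
    return [{"type": "vehicle", "distance_m": 18.0}, {"type": "pedestrian", "distance_m": 42.0}]

def decide(objects, speed_kph):
    """Placeholder decision rule: brake if anything sits inside a speed-dependent safety gap."""
    safety_gap_m = max(10.0, speed_kph * 0.5)   # assumed heuristic, not a real policy
    if any(obj["distance_m"] < safety_gap_m for obj in objects):
        return "brake"
    return "keep_lane"

for _ in range(3):                              # stands in for the real-time loop in the vehicle
    camera_frame, lidar_scan = None, None       # would come from the sensor drivers
    speed_kph = 54.0                            # would come from the CAN bus
    action = decide(perceive(camera_frame, lidar_scan), speed_kph)
    print(action)                               # the execution layer (steering/braking) would consume this
    time.sleep(0.05)                            # ~20 Hz decision cycle
```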

Fourth, driverless technology requires one more critical capability: data collection, commonly known as sample collection. For any company doing driverless development, the size of its data-collection fleet affects the quality of its vehicles, because the larger and more complete the samples, the more accurate the deep-learning algorithms become. With today's 4G networks, most bus data and camera images can be sent back to the cloud for large-scale sample collection. Vehicle models differ and so do the ways their bus data are collected, so driverless data-acquisition terminals will also attract great attention from the industry, because data-collection capability determines how well deep learning can work. A minimal sketch of such a collection loop follows.
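The collection loop referred to above might look roughly like this: bus data and camera-frame references are buffered locally and posted to the cloud over the cellular link. The endpoint URL and payload fields are placeholders, not part of any real API mentioned here.

```python
import json
import time
import urllib.request

UPLOAD_URL = "https://example.com/api/samples"   # placeholder endpoint, not a real service

def collect_sample():
    """Placeholder: one record of bus data plus a reference to the matching camera frame."""
    return {"ts": time.time(), "speed_kph": 61.0, "steering_deg": 2.5, "frame_id": "cam0_000123"}

def upload(batch):
    """POST a batch of samples to the cloud over the cellular link."""
    req = urllib.request.Request(
        UPLOAD_URL,
        data=json.dumps(batch).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

batch = [collect_sample() for _ in range(100)]   # buffer samples locally first
# upload(batch)                                  # sent when 4G connectivity allows
print(f"buffered {len(batch)} samples, ~{len(json.dumps(batch))} bytes")
```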

Combined with the project background and the development of blockchain technology, the work extends into automotive intelligence, big data and the internet of things. With cities taking an open attitude and many top talents entering intelligent driving and artificial intelligence, the differences on many core technical points have gradually been leveled; the gap in absolute technical capability is shrinking, and the ability to solve practical problems will matter more and more. The core of that ability is data.

Looking abroad, Tesla has the greatest advantage in this respect, because every car equipped with Autopilot (the world's first mass-produced assisted-driving system) accumulates data for Tesla in real-life driving. Tesla has sold more than 200,000 vehicles to date, most of them fitted with Autopilot hardware.

In July, Tesla executives announced that its electric vehicles worldwide had traveled more than 8 billion kilometers. According to foreign media reports, even when an owner does not engage Autopilot the system stays in "shadow mode": the sensors keep capturing data and the Autopilot algorithm makes its own judgments in the background without actually controlling the car. Every Tesla has networking capability, and Tesla has recently even asked owners to consent to uploading video captured by the car's cameras.

Tesla CEO Elon Musk wrote in Master Plan, Part Deux that the company expects to accumulate roughly 6 billion miles (nearly 10 billion kilometers) of autonomous driving before regulators worldwide grant approval. The industry consensus at the moment is that improving the algorithms requires feeding them enormous driving mileage: without enough miles, the system simply does not know how big the world is.

At present, total autonomous-vehicle mileage has only just passed 3 million miles (nearly 5 million kilometers). That is why Google has for years sent fleets of Prius and Lexus vehicles, its koala-like prototype cars and Chrysler minivans out for road testing, and why Uber rushed to launch autonomous test fleets in Pittsburgh and San Francisco, even falling out with the California Department of Motor Vehicles in the process.

 

The international trend being what it is, domestic manufacturers are naturally not far behind. In April 2016, Changan's self-driving cars completed a 2,000 km highway road test; as early as July 14, 2011, the Hongqi HQ3 driverless vehicle completed the 286 km fully unmanned highway run from Changsha to Wuhan in 3 hours and 22 minutes; in June 2016, China's first driverless demonstration base opened in Shanghai's Jiading International Automobile City, and the country's first driverless evaluation base will also be located at Tongji University's Jiading campus. At the same time, SAIC and Tongji University have formally signed a cooperation agreement to build a test base of more than 1,200 acres, which is expected to host China's first driverless certification system.

Robin Li (Li Yanhong) riding a self-driving Jeep Cherokee along Beijing's Fifth Ring Road to the National Convention Center sent a core signal: building a demo car that runs under ideal conditions may not be difficult for many teams, but solving how an autonomous vehicle should behave under extreme conditions, when there is little relevant data to train on, is hard even for the strongest technology.

On May 25, 2018, Shenzhen issued its implementation opinion on the "Intelligent and Connected Vehicle Road Test Management Regulations (Trial)", implementing the notice jointly issued by the Ministry of Industry and Information Technology, the Ministry of Public Security and the Ministry of Transport on the Intelligent and Connected Vehicle Road Test Management Regulations (Trial) (hereinafter the "Management Regulations"). The opinion further refined the conditions for applying for road testing in the city and clarified the application review process and the division of responsibilities among the relevant departments.

On March 16, 2018, the Shenzhen Transportation Commission issued the "Guiding Opinions on Standardizing Intelligent Driving Vehicle Road Tests in Shenzhen (Draft for Comment)" and publicly solicited opinions.

On December 2, 2017, four Shenzhen Bus Group buses running the "Alphaba Smart Driving Bus System" began trial operation in the Futian Free Trade Zone.

At the end of October 2017, two driverless shuttle lines were tested at the Southern University of Science and Technology.

The 2018 GIV Global Intelligent Vehicle Frontier Summit, guided by the Shenzhen Municipal Government and hosted by China EV100, attracted more than 500 representatives from governments, institutions, leading companies, start-up teams and financial institutions. Chen Qingtai, chairman of China EV100, Chen Qingquan, academician of the Chinese Academy of Engineering, Dan Sperling, founding director of the Institute of Transportation Studies at the University of California, Davis, and top experts from Huawei, Tencent, Bosch, GM, Baidu and other companies discussed topics such as visual perception, intelligent connectivity, artificial intelligence, intelligent-driving testing and its ecosystem, and 5G and vehicle-road coordination from the perspectives of technology, application, demonstration and communication, and delivered an important message:

Whatever the autonomous-driving scheme, it is ultimately used in a car. Judging from the attitude of most car companies today, achieving initial autonomous driving by 2020 is the hurdle: many automakers have announced that they will launch their own fully automated models by then. In reality, however, autonomous driving remains a highly practical piece of systems engineering that requires sensors, algorithms and execution units to work together, and its reliance on AI is deepening. At present no car company can complete it all on its own.

In March last year, GM announced it would spend nearly $600 million to acquire the automated-driving startup Cruise Automation, then a small team of just over 40 people. The move stirred controversy in the industry, and Tesla criticized the acquisition as evidence of a bubble in autonomous driving.

On February 11 this year, Ford announced a $1 billion investment in Argo.ai, an autonomous-driving startup established less than three months earlier. Although the official statement avoided the word "acquisition", Argo.ai has developed an exclusive relationship with Ford: the two companies will jointly develop Ford's virtual driver system with the goal of helping Ford deliver self-driving cars by 2021.

GM's decision last year was expensive, but Cruise's contribution to commercializing GM's autonomous cars is self-evident, and that is one reason Ford followed suit. The two cases have much in common: at the time of the deals, Argo.ai and Cruise were both startups in urgent need of capital and resources, while GM and Ford both had deep experience in vehicle manufacturing and supply-chain management but, as large companies opening up new businesses, suffered from big-company inertia and slow decision-making.

In the development of autonomous driving, Toyota may be one of the lowest-profile manufacturers, but the lack of exposure does not mean Toyota is indifferent. In 2014 Toyota did say that, for safety reasons, it would hold off on driverless vehicles for the time being, yet in 2015 it allocated a $1 billion budget to autonomous-driving projects.

On July 17, Toyota announced the formation of its own artificial-intelligence venture-capital arm, Toyota AI Ventures. The newly established firm is committed to investing in AI startups and has received an initial $100 million from the Toyota Research Institute (TRI).

So far the firm has invested in three startups. Nauto, from Silicon Valley, designs and develops systems that monitor drivers and the road environment for fleets, in order to prevent accidents and poor driving habits. SLAMcore, from the UK, develops algorithms for intelligent machines including drones and driverless cars, for example algorithms that help a vehicle build a map from its surroundings and its own location. The last is Intuition Robotics of Israel, a startup developing robotic companion technology.

Toyota's recent statement clearly expresses the intention of most automakers to invest again in AI, algorithm and robotics startups: "Toyota must adopt offensive and defensive strategies at the same time." As a car manufacturer with a history of nearly 80 years, Toyota will consider every other possible direction of development, including cooperation, mergers and acquisitions. The ultimate goal is to further strengthen Toyota's competitiveness, and seen in that light, GM's and Ford's moves were not made on a whim.

The root purpose of autonomous driving is to build a safer system. According to US mileage statistics for 2015, American cars traveled a total of 4.8 trillion kilometers that year, with on average one injury accident every 2 million kilometers and one fatal accident every 147 million kilometers. These figures show that the bar set by human drivers is not low. Taking one injury accident per 2 million kilometers and a Beijing-to-Shanghai distance of roughly 1,200 kilometers, an autonomous system would need to drive more than 800 injury-free round trips to show it is safer than a human driver. As for the "system": an automated driving system consists of three parts, perception, decision and execution, and mass-producing an automated driving function is by no means a simple stacking of the three. Beyond "safety and the system", the most important requirement for autonomous driving is redundancy.
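As a quick check of the "more than 800 round trips" figure, using only the numbers quoted above:

```python
# Rough arithmetic behind the "more than 800 round trips" claim, using only the figures above.
km_per_injury = 2_000_000          # one injury accident per ~2 million km
beijing_shanghai_km = 1_200        # one-way distance cited in the text
round_trip_km = 2 * beijing_shanghai_km

trips_without_injury = km_per_injury / round_trip_km
print(f"{trips_without_injury:.0f} injury-free round trips")   # ~833, i.e. "more than 800"
```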

The core components of automated driving include, at the sensing level, millimeter-wave radar, lidar, monocular and binocular cameras, ultrasonic sensors, wheel-speed sensors, acceleration sensors, gyroscope-based inertial navigation and positioning sensors; at the decision level, ADAS driver-assistance pre-controllers, automated-driving pre-controllers and vehicle controllers; and at the execution level, the steering, braking and engine-management systems.

In recent years, innovation and entrepreneurship in China's autonomous-driving field has surged. According to incomplete statistics, from Beijing to Shenzhen more than ten autonomous-driving solution providers have successively closed multiple rounds of financing, covering everything from the "binocular module" that serves as the self-driving car's eyes to the in-vehicle ECU computer that serves as its brain, and involving image-to-vector conversion, environment monitoring, scene models, vehicle-condition data, driving models, operating-system algorithms and decision-making. The decision-making part of autonomous driving is therefore characterized by many leading players and complicated technical points, and it demands large resources, deep technology and heavy capital before the next link can be tackled. Most machine learning is still at the information-gathering stage, and most of the information gathered at this stage consists of camera images converted into vectors and then stored; the host computer behind the car is very large, essentially a high-capacity storage server.

If a single car captures 320x320 images at 30 Hz and compresses them into an image library, two hours of driving generate roughly 4 GB of image data. Even with edge computing handling part of the work locally, the images still need to be uploaded over the cellular network (4G, 5G) to the cloud server for in-depth analysis. While the car is on the road, the V8 keeps collecting data continuously, and this data supports further safety verification.
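A back-of-the-envelope estimate shows how a figure on the order of 4 GB can arise. The bytes-per-pixel and compression ratio below are assumptions, not values from the text.

```python
# Back-of-the-envelope data-volume estimate for the camera stream described above.
# Assumptions (not from the source): 3 bytes per pixel raw, roughly 16:1 compression.
width, height, bytes_per_pixel = 320, 320, 3
fps = 30
hours = 2

raw_bytes = width * height * bytes_per_pixel * fps * hours * 3600
compressed_bytes = raw_bytes / 16                      # assumed compression ratio

print(f"raw:        {raw_bytes / 1e9:.1f} GB")         # ~66 GB uncompressed
print(f"compressed: {compressed_bytes / 1e9:.1f} GB")  # ~4 GB, in line with the figure above
```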

The I.MX6 integrates FlexCAN, the MLB bus, PCI Express® and SATA-2 for strong connectivity, and it also integrates LVDS, MIPI display and camera ports and HDMI v1.4, making it an ideal platform for advanced consumer, automotive and industrial multimedia applications.

The S32K, built on a Cortex-M4 core with an Ethernet interface, serves as the core processor for interacting with the vehicle bus and will adapt to all future models.

Both main processors are AEC-Q100 qualified, meeting automotive-grade requirements, and can in future be deployed at scale across all automotive applications.

We use the I.MX6 for high-speed video capture and decompression, and forward the CAN bus data collected by the S32K together with data from the peripheral GPS sensor to the server through a Qualcomm 4G module. This solves the data path from the acquisition end to the decision end: the collected vehicle CAN bus data, such as speed, steering angle and gear position, are used to build the driving model and assemble the training samples.
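A minimal sketch of the acquisition side might look like the following, assuming a Linux SocketCAN interface and hypothetical CAN IDs and scaling for speed and steering angle; real vehicles use model-specific IDs that are not given here.

```python
from typing import Optional

import can  # python-can; assumes a Linux SocketCAN interface such as can0

# Hypothetical CAN IDs and scaling; real vehicles use model-specific values.
SPEED_ID = 0x1F0
STEERING_ID = 0x1F5

def decode(msg: can.Message) -> Optional[dict]:
    """Turn a raw CAN frame into a named sample (illustrative scaling only)."""
    if msg.arbitration_id == SPEED_ID:
        return {"signal": "speed_kph", "value": int.from_bytes(msg.data[:2], "big") * 0.01}
    if msg.arbitration_id == STEERING_ID:
        return {"signal": "steering_deg",
                "value": int.from_bytes(msg.data[:2], "big", signed=True) * 0.1}
    return None

def main() -> None:
    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    for msg in bus:                 # iterate over received frames
        sample = decode(msg)
        if sample:
            print(sample)           # in the real pipeline this would be queued for 4G upload

if __name__ == "__main__":
    main()
```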

 

Schematic diagram of the software architecture of the intelligent driving experimental platform

Application service layer: the software gateway of the intelligent driving experimental platform, with video parsing and rendering. Its main functions are provided as services, including the bus-data service running on the CAN gateway, remote upgrade, local data processing, application platform deployment, security-service monitoring and other extended services.

Application framework layer: provides the execution environment the application services need to run, including the Java virtual machine (Java runtime environment), the web service engine, development kits, the MQTT security module and other extension packages.

Component layer: provides the core function modules the gateway system needs to operate, including the power module group, which supplies power to the S32K, the I.MX6 and the 4G communication module; the S32K module group, which handles CAN protocol parsing, voltage conversion, data parsing, data packaging and related functions; and the management module group, which handles 4G RF signal reception, device GPS positioning, external Bluetooth connections and other management functions.

Linux & driver layer: mainly the Linux operating system and the platform drivers. It is the foundation on which the gateway runs, including the main chip driver and the various network drivers such as WiFi, Bluetooth and the serial port.

Artificial intelligence is a discipline and technology built on computing. It simulates the human brain on a computer platform to analyze and process images and data intelligently, replacing human work with computation, thereby reducing the human-resource input and keeping costs to a minimum. The human brain is itself the most sophisticated and complex system; artificial intelligence simulates and imitates the brain's thinking process to achieve intelligent control. AI has been widely applied in many fields, including automated driving control in smart cars, and its greatest advantage is that it can collect and process information and perform large amounts of calculation in place of humans.
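As an illustration of how the gateway's MQTT module might push bus data upward, here is a short sketch using the paho-mqtt client; the broker address and topic are placeholders, and the real gateway stack may differ.

```python
import json
import time

import paho.mqtt.client as mqtt   # assumed client library; the gateway's actual stack may differ

BROKER_HOST = "broker.example.com"   # placeholder broker address
TOPIC = "vehicle/can/sample"         # placeholder topic

client = mqtt.Client()
client.connect(BROKER_HOST, 1883, keepalive=60)
client.loop_start()

# One bus-data sample as it might be forwarded by the gateway's service layer.
sample = {"ts": time.time(), "speed_kph": 42.5, "steering_deg": -3.2, "gear": "D"}
client.publish(TOPIC, json.dumps(sample), qos=1)

client.loop_stop()
client.disconnect()
```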

 

Direct control by artificial intelligence

The tracking stage exploits the temporal continuity between successive frames and the spatial correlation of the detected target from the current frame to the next to achieve real-time, stable detection. The main approaches are:

(1) Method based on contrast analysis

Target tracking based on contrast analysis uses the contrast difference between the target and the background to extract, identify and track the target. Depending on the tracking reference point, such algorithms can be divided into edge tracking and centroid tracking. This class of algorithm is not suitable for tracking targets against complex backgrounds.

(2) Matching based method

Matching-based methods include feature matching, Bayesian tracking and mean shift (Mean Shift, MS). Bayesian tracking frameworks include the Kalman filter (KF), the particle filter (PF), hidden Markov models (HMMs) and dynamic Bayesian networks (DBNs); these algorithms differ mainly in how they model the motion distribution. A minimal Kalman-filter sketch follows this list.

(3) TLD-based method

TLD (Tracking-Learning-Detection) is a framework that has become popular in recent years. It integrates detection, tracking and online learning so that the detector is learned and updated online, enabling long-term tracking of a target. What distinguishes it from traditional trackers is that it combines a conventional tracking algorithm with a conventional detection algorithm to handle deformation and partial occlusion of the target during tracking. At the same time, an improved online learning mechanism continuously updates the "significant feature points" of the tracking module and the target model and related parameters of the detection module, making the tracking more stable and reliable. Its drawback is that it is only suited to single-target tracking.
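To make the matching-based family above concrete, here is a minimal constant-velocity Kalman filter that smooths a single target's 1-D position across frames. The noise parameters are invented for illustration; this is not the implementation of any product mentioned in the article.

```python
import numpy as np

# Constant-velocity Kalman filter for 1-D position tracking (illustrative parameters).
dt = 1.0 / 30.0                           # frame interval at 30 Hz
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition for [position, velocity]
H = np.array([[1.0, 0.0]])                # we only measure position
Q = np.eye(2) * 1e-3                      # process noise (assumed)
R = np.array([[0.5]])                     # measurement noise (assumed)

x = np.array([[0.0], [0.0]])              # initial state
P = np.eye(2)                             # initial covariance

def kalman_step(z: float) -> float:
    """One predict/update cycle given a position measurement z; returns the filtered position."""
    global x, P
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = np.array([[z]]) - H @ x           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return float(x[0, 0])

# Example: smooth a noisy track of per-frame detections.
for z in [0.0, 1.1, 1.9, 3.2, 3.9, 5.1]:
    print(f"measured {z:.1f} -> filtered {kalman_step(z):.2f}")
```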

 

To date, two utility-model patents have been granted: "Device for Controlling Car Door Locks and Windows", Patent No. ZL201621280342.4, and "New Navigation Equipment Based on Projection Display on the Automobile Windshield", Patent No. ZL201320310820.1. Five software copyrights have also been obtained:

Software Name | Authorization Date | Registration No.
Thread vehicle bus analysis integrated module system V4.0 | 2015.6.11 | 1355494
Thread on-car treasure box intelligent terminal system V4.0 | 2016.5.12 | 1355490
Thread car networking intelligent terminal system V4.0 | 2016.1.7 | 1355429
Thread body bus remote control system V4.0 | 2016.6.3 | 1354811
Thread bus management intelligent terminal system V4.0 | 2016.2.4 | 1355630

Thread has cooperated with a number of enterprises, research institutes and universities:

8.1 Thread and Professor Wang Xiang of Sun Yat-sen University developed a TSP remote management service system project for Genesis Automotive Group.

8.2 Thread and Zhao Rui of the Fudan University Insurance Research Institute have carried out in-depth cooperation and development on UBI insurance for the Internet of Vehicles.

8.3 Thread and Su Yanzhao of the university automobile institute developed vehicle-speed, steering-angle and related CAN bus products for ADAS advanced driver-assistance systems.

8.4 Thread cooperated with Mr. Yang of Chongqing University of Posts and Telecommunications to supply Changan with time-share rental TBOX terminal products, which have been deployed in Changan's new-energy vehicles.

8.5 Thread worked with the National Bureau of Metrology, the Environmental Protection Agency and the China Automotive Research Center to develop the "Technical Requirements and Communication Format for Remote Emission Management Vehicle Terminals". Strict requirements for remote emission monitoring of heavy trucks were introduced, along with DB11/1475-2017, "Heavy-duty Vehicle Exhaust Pollutant Emission Limits and Measurement Methods".

8.6 Thread and Zhu Yong of Wuhan University jointly developed a vehicle network management system and carried out development on converting captured video frames into vectors, mainly for lane-departure warning.

8.7 Thread jointly developed a vehicle AI brain with Professors Sun Xuan, Liu Xuan and Wu Yiping of Beijing University of Technology, collecting vehicle-speed and steering-angle information for active safe driving and machine learning on commercial logistics trucks.

8.8 Thread and the China Automotive Technology and Research Center jointly studied the principles and test methods of the Toyota hydrogen-energy vehicle. In accordance with the requirements of the national science and technology authorities and the Economic and Information Committee, they purchased test equipment and conducted full-vehicle data testing.

8.9 Thread developed a remote control system for car door locks, lights and windows for the Zhongbao (BMW) Group, collected in-car infrared dynamic warning data, and laid the foundation for the data-security system of BMW (China) Group.

8.10 Thread developed the "Car Enjoy Box" for SAIC Group, providing a solid data foundation for the group's emerging automotive big-data business applications.

Thread's vehicle-bus technology is at an internationally advanced level, and the core technology carries independent intellectual property rights. The company has undertaken and completed a number of national, provincial and municipal key science and technology projects and participated in drafting the heavy-duty vehicle exhaust pollutant emission limits and measurement methods and many other national and local standards.

At present, the smart-car, AI-algorithm and data-acquisition products on the market rely mainly on image capture. They can deliver driver assistance but cannot reach L3-level automated driving or intelligent vehicles, and most of them amount to a large computer placed in the trunk whose main job is to collect masses of data; there are as yet no national hardware requirements for these new terminals. As for making decisions on the basis of automotive CAN bus data, no company in the industry has yet put this into practice: first, the barriers of CAN bus technology itself are relatively high; second, vehicle models are extremely varied; and third, only a handful of companies in the world can combine video and image data with bus data. Many "knock-off boxes" in the industry collect data from different ECU nodes by hard-wiring into the car's sensors, but this seriously affects vehicle safety, and many customers have suffered electrical faults from such wiring, in some cases even burning out the car. Only wiring-free, bus-level access can guarantee both safety and real-time data. Most image-based products simply convert camera pictures into vector graphics; there are algorithms involved, but the barriers are not high.

In its trend analysis, Shenwan Hongyuan pointed out that 2018 is the breakout year for vehicle TBOX devices, with the market accelerating sharply after April. The V8 communicates with the host over the CAN bus to exchange commands and information, including vehicle status, speed, steering-angle status and control commands, and through the back-end system it maintains high-speed two-way communication with the cloud over a data link. Within five years, the number of intelligent-driving and autonomous-driving users on the system is expected to reach ten million.

It is estimated that during the two-year implementation period the project will generate sales revenue of 40 million yuan, profit of 12.6 million yuan and tax payments of 4 million yuan. Within five years, the project is expected to realize sales revenue of 250 million yuan, profit of nearly 60 million yuan and tax payments of nearly 20 million yuan.

Excellent talent is the foundation of the company. The company has formulated a series of effective policies for selecting, using, developing and retaining talent, motivating employees to work toward the company's goals and achieving a win-win outcome for the company and its employees. When formulating compensation policy, the company fully considers the balance of short-, medium- and long-term compensation; pay includes wages, bonuses and option benefits.

To implement this project smoothly, the plan is to add 13 new technology developers and 7 other new employees. The company has also organized a team of technical backbones to train new employees systematically and ensure the healthy development of the technical echelon.

In the next decade, the automotive industry will be defined by electrification, smart cars and smart cities. Future vehicles will further improve the network connectivity of driverless, network-based cars and taxis around the world; the CAN-bus-based V8 will be deeply integrated with the underlying technology of smart cars, helping travel companies, autonomous-driving companies and big-data companies collect driving data and, combined with their new dispatch systems, reach every corner of the globe faster.

We already have autonomous-driving customers, and the shared-car operator Xiao Ming Travel has already laid out its autonomous-driving plans. Through a pilot project we transfer V8 data to the cloud: V8 units installed in the one million vehicles operated by these travel, autonomous-driving and big-data companies analyze driving patterns and build models based on location, environment and driving habits, in order to better provide connected-car services such as AI-based dispatching.

As a result of this phased R&D, in June of this year we began jointly testing the new V8 system in Beijing. The V8 uses data from smartphones, taxi locations, weather patterns and other factors to determine the most efficient distribution of networked and shared vehicles.

Beyond improving dispatch efficiency and profits, the V8 will make full use of the vehicle data collected on its "TSP" car-service platform, refining the service domestically and expanding into Japan and South Korea, and linking driving data to car insurance, financial products such as financing and leasing for drivers, predictive maintenance and other services, as well as verifiable ride-hailing vehicle certification.

At the same time, real-time driving data and video will help platform companies build dynamic maps and help Thread enrich its autonomous-driving R&D model samples and keep them up to date.

In the future, the automotive service platform will provide a full-process learning system for carpooling, car-rental, shared-car and ride-hailing companies, offering them integrated services including customized functions for managing, using and analyzing vehicles. Beyond these automotive services, the platform also covers cooperation with various data-driven service companies such as automotive telematics insurers, identification companies and CRM providers. For example, in January of this year Toyota Motor and the insurer Aioi Nissay Dowa launched Japan's first car telematics insurance tied to driving behavior.

The principle is this: once the V8 is installed in a vehicle, it communicates frequently with the cloud, digitizing the driver's skill, the vehicle's condition, traffic conditions and other information. The aggregated big data are stored in the cloud for management and analysis, and when a third-party enterprise connects, that information can be used to provide services.

These data not only support real-time map updates and OTA updates of the in-vehicle software; they also allow the monthly mileage and driving characteristics of each connected car to be determined from its driving data, so that customized insurance discount strategies can be offered. A hypothetical illustration of such a discount rule follows.
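A hypothetical sketch of such a discount rule, with invented thresholds and weights purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class MonthlyDrivingProfile:
    mileage_km: float               # derived from uploaded odometer / CAN speed data
    harsh_brakes_per_100km: float   # derived from deceleration events
    night_share: float              # fraction of driving done at night (0..1)

def premium_discount(profile: MonthlyDrivingProfile) -> float:
    """Return a discount factor in [0, 0.3] using simple, made-up scoring rules."""
    score = 0.0
    if profile.mileage_km < 1000:
        score += 0.10               # low-mileage discount
    if profile.harsh_brakes_per_100km < 2:
        score += 0.15               # smooth-driving discount
    if profile.night_share < 0.2:
        score += 0.05               # mostly daytime driving
    return min(score, 0.30)

print(premium_discount(MonthlyDrivingProfile(800, 1.2, 0.1)))   # -> 0.3
```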

The domestic and broader Southeast Asian markets have great potential and have long been contested by major capital; in Thailand alone, traditional manufacturers such as Honda, Toyota and Mazda are well established. These manufacturers are paying more and more attention to data and hope to build new data-driven service ecosystems. Toyota announced the e-Palette Alliance at CES 2018; its first partners include Didi Chuxing, Mazda, Amazon, Pizza Hut and Uber, spanning automakers, travel platforms, e-commerce and catering, and covering vehicle suppliers, operating platforms and merchants.

In general, algorithms, sensors, computing hardware, basic decision-making and data-collection capability all shape the development of driverless technology. On the strength of these technologies, driverless systems are gradually becoming a mainstream direction.

2021 may be the first year of driverless vehicles at scale, with some large companies producing more than 100,000 driverless vehicles. When driverless vehicles arrive, the automotive industry is likely to be upended.

 

