Add pre-commit hooks (#1629)

albatross
Adeeb 2020-06-03 12:54:49 -07:00 committed by GitHub
parent 2144154c32
commit ab83e48ec4
26 changed files with 99 additions and 96 deletions


@@ -6,6 +6,10 @@ repos:
    - id: check-json
    - id: check-xml
    - id: check-yaml
    - id: check-merge-conflict
    - id: check-symlinks
    - id: trailing-whitespace
      exclude: '^(pyextra)|(external)|(cereal)|(rednose)|(panda)|(laika)|(opendbc)|(laika_repo)|(rednose_repo)|(phonelibs)|(lib_mpc_export)/'
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: master
    hooks:
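The `exclude` value is a plain Python regex (pre-commit matches it against each file path with `re.search` semantics), so it can be sanity-checked outside the hook machinery. A quick illustrative check, with hypothetical paths:

```python
import re

# The trailing-whitespace hook's exclude pattern, verbatim from the config above.
EXCLUDE = re.compile(r'^(pyextra)|(external)|(cereal)|(rednose)|(panda)|(laika)|(opendbc)|(laika_repo)|(rednose_repo)|(phonelibs)|(lib_mpc_export)/')

# A file is skipped when the pattern matches anywhere in its path.
print(bool(EXCLUDE.search("cereal/log.capnp")))    # -> True (vendored subtree, skipped)
print(bool(EXCLUDE.search("selfdrive/ui/ui.cc")))  # -> False (first-party, checked)
```

With the config in place, `pre-commit install` registers the hooks in `.git/hooks` and `pre-commit run --all-files` applies them to the whole tree.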


@@ -1,7 +1,7 @@
openpilot Safety
======
openpilot is an Adaptive Cruise Control (ACC) and Automated Lane Centering (ALC) system.
Like other ACC and ALC systems, openpilot is a failsafe passive system and it requires the
driver to be alert and to pay attention at all times.
@@ -22,7 +22,7 @@ hardware-in-the-loop and in-vehicle tests before each software release.
Following Hazard and Risk Analysis and FMEA, at a very high level, we have designed openpilot
ensuring two main safety requirements.
1. The driver must always be capable of immediately retaking manual control of the vehicle,
   by stepping on either pedal or by pressing the cancel button.
2. The vehicle must not alter its trajectory too quickly for the driver to safely
   react. This means that while the system is engaged, the actuators are constrained


@@ -25,14 +25,14 @@ by generating a rotation matrix and multiplying.
Orientation Conventions
------
Quaternions, rotation matrices and euler angles are three
equivalent representations of orientation and all three are
used throughout the code base.
For euler angles the preferred convention is [roll, pitch, yaw],
which corresponds to rotations around the [x, y, z] axes. All
euler angles should always be in radians or radians/s unless
used for plotting or display purposes. For quaternions the Hamilton
notation is preferred, which is [q<sub>w</sub>, q<sub>x</sub>, q<sub>y</sub>, q<sub>z</sub>]. All quaternions
should always be normalized with a strictly positive q<sub>w</sub>. **These
quaternions are a unique representation of orientation whereas euler angles
@@ -49,7 +49,7 @@ EONs are not all mounted in the exact same way. To compensate for the effects of
Example
------
To transform global Mesh3D positions and orientations (positions_ecef, quats_ecef) into the local frame described by the
first position and orientation from Mesh3D one would do:
```
ecef_from_local = rot_from_quat(quats_ecef[0])
```
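A minimal numpy sketch of these conventions — note `rot_from_quat` here is a self-contained stand-in written for illustration, not the helper from `common.transformations`:

```python
import numpy as np

def rot_from_quat(q):
    # Rotation matrix from a Hamilton-convention quaternion [q_w, q_x, q_y, q_z].
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def normalize_quat(q):
    # Normalize, then flip sign so q_w is strictly positive per the convention.
    q = q / np.linalg.norm(q)
    return -q if q[0] < 0 else q

# Toy data: express global positions in the local frame of the first pose.
quats_ecef = np.array([[1.0, 0.0, 0.0, 0.0]])
positions_ecef = np.array([[10.0, 0.0, 0.0], [13.0, 4.0, 0.0]])

ecef_from_local = rot_from_quat(quats_ecef[0])
local_from_ecef = ecef_from_local.T  # for rotation matrices, inverse == transpose
positions_local = (positions_ecef - positions_ecef[0]) @ local_from_ecef.T
print(positions_local[1])  # -> [3. 4. 0.]
```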


@@ -261,7 +261,7 @@ void sensors_init(int video0_fd, int sensor_fd, int camera_num) {
  power->power_settings[2].power_seq_type = 2; // digital
  power->power_settings[3].power_seq_type = 8; // reset low
  power = (void*)power + (sizeof(struct cam_cmd_power) + (power->count-1)*sizeof(struct cam_power_settings));
  unconditional_wait = (void*)power;
  unconditional_wait->cmd_type = CAMERA_SENSOR_CMD_TYPE_WAIT;
  unconditional_wait->delay = 5;
@@ -424,7 +424,7 @@ void config_isp(struct CameraState *s, int io_mem_handle, int fence, int request
    0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
    0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
    0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
    0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0};
  memcpy(buf2, tmp, sizeof(tmp));
  if (io_mem_handle != 0) {
@@ -610,7 +610,7 @@ static void camera_open(CameraState *s, VisionBuf* b) {
  acquire_dev_cmd.handle_type = CAM_HANDLE_USER_POINTER;
  acquire_dev_cmd.num_resources = 1;
  acquire_dev_cmd.resource_hdl = (uint64_t)&isp_resource;
  isp_resource.resource_id = CAM_ISP_RES_ID_PORT;
  isp_resource.length = sizeof(struct cam_isp_in_port_info) + sizeof(struct cam_isp_out_port_info)*(1-1);
  isp_resource.handle_type = CAM_HANDLE_USER_POINTER;
@@ -643,7 +643,7 @@ static void camera_open(CameraState *s, VisionBuf* b) {
  in_port_info->test_pattern = 0x2; // 0x3?
  in_port_info->usage_type = 0x0;
  in_port_info->left_start = 0x0;
  in_port_info->left_stop = FRAME_WIDTH - 1;
  in_port_info->left_width = FRAME_WIDTH;
@@ -664,10 +664,10 @@ static void camera_open(CameraState *s, VisionBuf* b) {
  in_port_info->num_out_res = 0x1;
  in_port_info->data[0] = (struct cam_isp_out_port_info){
    .res_type = CAM_ISP_IFE_OUT_RES_RDI_0,
    //.format = CAM_FORMAT_MIPI_RAW_12,
    .format = CAM_FORMAT_MIPI_RAW_10,
    .width = FRAME_WIDTH,
    .height = FRAME_HEIGHT,
    .comp_grp_id = 0x0, .split_point = 0x0, .secure_mode = 0x0,
  };
@@ -700,7 +700,7 @@ static void camera_open(CameraState *s, VisionBuf* b) {
              CAM_SENSOR_PACKET_OPCODE_SENSOR_CONFIG);
  sensors_i2c(s, start_reg_array, sizeof(start_reg_array)/sizeof(struct i2c_random_wr_payload),
              CAM_SENSOR_PACKET_OPCODE_SENSOR_STREAMON);
  sensors_i2c(s, stop_reg_array, sizeof(stop_reg_array)/sizeof(struct i2c_random_wr_payload),
              CAM_SENSOR_PACKET_OPCODE_SENSOR_STREAMOFF);
  // config csiphy
@@ -817,7 +817,7 @@ void cameras_open(DualCameraState *s, VisionBuf *camera_bufs_rear, VisionBuf *ca
  s->rear.device_iommu = s->front.device_iommu = s->wide.device_iommu = device_iommu;
  s->rear.cdm_iommu = s->front.cdm_iommu = s->wide.cdm_iommu = cdm_iommu;
  // subscribe
  LOG("-- Subscribing");
  static struct v4l2_event_subscription sub = {0};
  sub.type = 0x8000000;


@@ -302,8 +302,8 @@ struct i2c_random_wr_payload init_array_ar0231[] = {
};
struct i2c_random_wr_payload poke_array_ov7750[] = {
  {0x3208, 0x0}, {0x380e, 0x1a}, {0x380f, 0xf0}, {0x3500, 0x0}, {0x3501, 0x0}, {0x3502, 0x10}, {0x350a, 0x0}, {0x350b, 0x10}, {0x3208, 0x10}, {0x3208, 0xa0},
  //{0x3208, 0x0}, {0x380e, 0x1a}, {0x380f, 0xf0}, {0x3500, 0x0}, {0x3501, 0x0}, {0x3502, 0x10}, {0x350a, 0x0}, {0x350b, 0x10}, {0x3208, 0x10}, {0x3208, 0xa0},
};
struct i2c_random_wr_payload preinit_array_ov7750[] = {


@@ -3,7 +3,7 @@
// convert input rgb image to single channel then conv
__kernel void rgb2gray_conv2d(
  const __global uchar * input,
  __global short * output,
  __constant short * filter,
  __local uchar3 * cached
@@ -23,8 +23,8 @@ __kernel void rgb2gray_conv2d(
  // pad
  if (
    get_global_id(0) < HALF_FILTER_SIZE ||
    get_global_id(0) > IMAGE_W - HALF_FILTER_SIZE - 1 ||
    get_global_id(1) < HALF_FILTER_SIZE ||
    get_global_id(1) > IMAGE_H - HALF_FILTER_SIZE - 1
  )
@@ -32,11 +32,11 @@ __kernel void rgb2gray_conv2d(
    barrier(CLK_LOCAL_MEM_FENCE);
    return;
  }
  else
  {
    int localColOffset = -1;
    int globalColOffset = -1;
    // cache extra
    if ( get_local_id(0) < HALF_FILTER_SIZE )
    {
@@ -51,7 +51,7 @@ __kernel void rgb2gray_conv2d(
    {
      localColOffset = get_local_id(0) + TWICE_HALF_FILTER_SIZE;
      globalColOffset = HALF_FILTER_SIZE;
      cached[ myLocal + HALF_FILTER_SIZE ].x = input[ my * 3 + HALF_FILTER_SIZE * 3 ];
      cached[ myLocal + HALF_FILTER_SIZE ].y = input[ my * 3 + HALF_FILTER_SIZE * 3 + 1];
      cached[ myLocal + HALF_FILTER_SIZE ].z = input[ my * 3 + HALF_FILTER_SIZE * 3 + 2];


@@ -1,6 +1,6 @@
// calculate variance in each subregion
__kernel void var_pool(
  const __global char * input,
  __global ushort * output // should not be larger than 128*128 so uint16
)
{
@@ -11,7 +11,7 @@ __kernel void var_pool(
  float fsum = 0;
  char mean, max;
  for (int i = 0; i < size; i++) {
    int x_offset = i % X_PITCH;
    int y_offset = i / X_PITCH;
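The per-subregion variance this kernel computes can be sketched in Python with numpy. This is an illustrative reference, not the kernel's exact arithmetic; the pool dimensions `px`/`py` stand in for the kernel's `X_PITCH`-derived subregion size:

```python
import numpy as np

def var_pool(img, px, py):
    # Split the image into px-by-py subregions and compute each one's variance,
    # mirroring what the OpenCL var_pool kernel does per work-item.
    h, w = img.shape
    out = np.empty((h // py, w // px))
    for j in range(h // py):
        for i in range(w // px):
            out[j, i] = img[j*py:(j+1)*py, i*px:(i+1)*px].var()
    return out

img = np.array([[0, 0, 10, 10],
                [0, 0, 10, 10]], dtype=np.int8)
print(var_pool(img, 2, 2))  # -> [[0. 0.]]  (both 2x2 blocks are constant)
```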


@@ -43,7 +43,7 @@ void rgb_to_yuv_queue(RGBToYUVState* s, cl_command_queue q, cl_mem rgb_cl, cl_me
  err = clSetKernelArg(s->rgb_to_yuv_krnl, 1, sizeof(cl_mem), &yuv_cl);
  assert(err == 0);
  const size_t work_size[2] = {
    (size_t)(s->width + (s->width % 4 == 0 ? 0 : (4 - s->width % 4))) / 4,
    (size_t)(s->height + (s->height % 4 == 0 ? 0 : (4 - s->height % 4))) / 4
  };
  cl_event event;
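The `work_size` expression is just "round up to the next multiple of 4, then divide by 4", so each work-item covers a 4x4 tile even when the image dimensions are not multiples of 4. The arithmetic can be checked in isolation (function name is illustrative):

```python
def round_up_div(x, m):
    # x rounded up to the next multiple of m, then divided by m --
    # the same arithmetic the work_size initializer performs with m = 4.
    return (x + (0 if x % m == 0 else m - x % m)) // m

print(round_up_div(1928, 4))  # -> 482  (already a multiple of 4)
print(round_up_div(1930, 4))  # -> 483  (rounded up from 482.5)
```

The same result is more commonly written as `(x + m - 1) // m`.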


@@ -70,7 +70,7 @@ void tbuffer_dispatch(TBuffer *tb, int idx) {
    efd_write(tb->efd);
    pthread_cond_signal(&tb->cv);
  pthread_mutex_unlock(&tb->lock);
}
int tbuffer_acquire(TBuffer *tb) {
@@ -344,7 +344,7 @@ void pool_push(Pool *s, int idx) {
  for (int i=0; i<POOL_MAX_QUEUES; i++) {
    PoolQueue *c = &s->queues[i];
    if (!c->inited) continue;
    pthread_mutex_lock(&c->lock);
    if (((c->head+1) % c->num) == c->tail) {
      // queue is full. skip for now
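The full check `((c->head+1) % c->num) == c->tail` is the classic one-slot-reserved circular-buffer convention: the queue is "full" when advancing `head` would collide with `tail`, so full and empty are distinguishable without a separate counter. A small Python model of just this invariant (illustrative, not the C implementation):

```python
class RingQueue:
    # One-slot-reserved circular queue: with capacity num, at most num - 1
    # items can be stored, so full and empty never look the same.
    def __init__(self, num):
        self.num = num
        self.head = 0  # next write slot
        self.tail = 0  # next read slot

    def full(self):
        return (self.head + 1) % self.num == self.tail

    def empty(self):
        return self.head == self.tail

    def push(self):
        if self.full():
            return False  # pool_push skips a full queue the same way
        self.head = (self.head + 1) % self.num
        return True

q = RingQueue(3)
print(q.push(), q.push(), q.push())  # -> True True False
```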


@@ -98,7 +98,7 @@ extern "C" FramebufferState* framebuffer_init(
  assert(success);
  printf("egl version %d.%d\n", s->egl_major, s->egl_minor);
  EGLint num_configs;
  success = eglChooseConfig(s->display, attribs, &s->config, 1, &num_configs);
  assert(success);


@@ -138,7 +138,7 @@ int main(int argc, char *argv[]) {
    }
    if (sm.updated("cameraOdometry")){
      localizer.handle_log(sm["cameraOdometry"]);
    }
  }
  return 0;
}


@@ -15,7 +15,7 @@ public:
  int LogFrame(uint64_t ts, const uint8_t *y_ptr, const uint8_t *u_ptr, const uint8_t *v_ptr, int *frame_segment) {
    std::lock_guard<std::recursive_mutex> guard(lock);
    if (opening) {
      Open(next_path);
      opening = false;


@@ -729,7 +729,7 @@ int main(int argc, char** argv) {
  for (auto s : socks){
    delete s;
  }
  delete poller;
  delete s.ctx;
  return 0;


@@ -44,7 +44,7 @@ void* live_thread(void *arg) {
  while (!do_exit) {
    if (sm.update(10) > 0){
      auto extrinsic_matrix = sm["liveCalibration"].getLiveCalibration().getExtrinsicMatrix();
      Eigen::Matrix<float, 3, 4> extrinsic_matrix_eigen;
      for (int i = 0; i < 4*3; i++){


@@ -180,7 +180,7 @@ if (intercept) {
      //disassemble((uint32_t *)qcmd.data(), qcmd.size()/4);
      //queue_cmds.push_back(qcmd);
    }
    #ifdef DUMP
    char tmp[0x100];
    snprintf(tmp, sizeof(tmp), "/tmp/thneed/run_%d_%d", run_num, ioctl_num++);
@@ -515,7 +515,7 @@ cl_mem clCreateImage(cl_context context, cl_mem_flags flags, const cl_image_form
  assert(image_desc->image_array_size == 0);
  assert(image_desc->image_slice_pitch == 0);
  //assert(image_desc->image_width * image_desc->image_height * 2 == image_desc->image_row_pitch);
  image img;
  img.image_width = image_desc->image_width;
  img.image_height = image_desc->image_height;
@@ -689,7 +689,7 @@ int main(int argc, char* argv[]) {
  maps[len] = '\0';
  fclose(f);
  printf("%s\n", maps);*/
  printf("buffers: %lu images: %lu\n", buffers.size(), images.size());
  printf("queues: %lu\n", queue_cmds.size());


@@ -14,8 +14,8 @@
#pragma OPENCL EXTENSION cl_khr_fp16 : enable
__kernel void gemm(const int M, const int N, const int K,
                   read_only image2d_t A,
                   read_only image2d_t B,
                   write_only image2d_t C)
{
  const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |


@@ -128,7 +128,7 @@ int main(int argc, char *argv[]) {
  M = N = K = 1024;
  //M = 128; K = 2112; N = 352;
  cl_kernel kern = clCreateKernel(prog, "gemm", &err);
  assert(err == 0);
  printf("creating kernel %p\n", kern);
@@ -255,7 +255,7 @@ int main(int argc, char *argv[]) {
  clSetKernelArg(kern, 20, sizeof(n), &n);
  clSetKernelArg(kern, 21, sizeof(n), &n);
  v = 16; clSetKernelArg(kern, 22, sizeof(v), &v);
  size_t global_work_size[3] = {88, 4, 8};
  size_t local_work_size[3] = {4, 4, 8};
#endif


@@ -81,7 +81,7 @@ assert(font >= 0);
  float lineh;
  nvgTextMetrics(vg, NULL, NULL, &lineh);
  // nvgTextBox strips leading whitespace. We have to reimplement
  char * next = strtok(text, "\n");
  while (next != NULL){


@@ -18,7 +18,7 @@ static uint32_t read24be(const uint8_t* ptr) {
}
static void write32le(FILE *of, uint32_t v) {
  uint8_t va[4] = {
    v & 0xff, (v >> 8) & 0xff, (v >> 16) & 0xff, (v >> 24) & 0xff
  };
  fwrite(va, 1, sizeof(va), of);
}
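`write32le` emits the four bytes of `v` least-significant first. The same packing can be cross-checked against Python's standard `struct` module (`write32le_bytes` is an illustrative stand-in, not part of the codebase):

```python
import struct

def write32le_bytes(v):
    # Little-endian byte order: least-significant byte first,
    # matching write32le's shift-and-mask sequence.
    return bytes([v & 0xff, (v >> 8) & 0xff, (v >> 16) & 0xff, (v >> 24) & 0xff])

v = 0x01020304
assert write32le_bytes(v) == struct.pack("<I", v)
print(write32le_bytes(v).hex())  # -> 04030201
```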
@@ -135,7 +135,7 @@ static void hevc_index(const uint8_t *data, size_t file_size, FILE *of_prefix, F
      bs_get(&bs, 1);
    }
    uint32_t slice_type = bs_ue(&bs);
    // write the index
    write32le(of_index, slice_type);
    write32le(of_index, ptr - data);
@@ -244,7 +244,7 @@ static void h264_index(const uint8_t *data, size_t file_size, FILE *of_prefix, F
    uint32_t pic_parameter_set_id = bs_ue(&bs);
    uint32_t frame_num = bs_get(&bs, sps_log2_max_frame_num_minus4+4);
    if (first_mb_in_slice == 0) {
      write32le(of_index, slice_type);
      write32le(of_index, ptr - data);


@@ -71,7 +71,7 @@ LogReader::LogReader(const QString& file, Events *events_, QReadWriteLock* event
    while (1) {
      mergeEvents(cdled.get());
    }
  });
}
void LogReader::mergeEvents(int dled) {


@@ -4,7 +4,7 @@
#include <capnp/dynamic.h>
#include <capnp/schema.h>
// include the dynamic struct
#include "cereal/gen/cpp/car.capnp.c++"
#include "cereal/gen/cpp/log.capnp.c++"
@@ -24,7 +24,7 @@ static inline uint64_t nanos_since_boot() {
}
Unlogger::Unlogger(Events *events_, QReadWriteLock* events_lock_, QMap<int, FrameReader*> *frs_, int seek)
  : events(events_), events_lock(events_lock_), frs(frs_) {
  ctx = Context::create();
  YAML::Node service_list = YAML::LoadFile("../../cereal/service_list.yaml");


@@ -44,7 +44,7 @@ class Window : public QWidget {
  QMap<int, LogReader*> lrs;
  QMap<int, FrameReader*> frs;
  // cache the bar
  QPixmap *px = NULL;
@@ -72,7 +72,7 @@ Window::Window(QString route_, int seek, int use_api_) : route(route_), use_api(
  file.open(QIODevice::ReadOnly | QIODevice::Text);
  settings = file.readAll();
  file.close();
  QJsonDocument sd = QJsonDocument::fromJson(settings.toUtf8());
  qWarning() << sd.isNull(); // <- print false :)
  QJsonObject sett2 = sd.object();
@@ -97,7 +97,7 @@ bool Window::addSegment(int i) {
    lrs.insert(i, new LogReader(fn, &events, &events_lock, &unlogger->eidx));
  } else {
    QString log_fn = this->log_paths.at(i).toString();
    lrs.insert(i, new LogReader(log_fn, &events, &events_lock, &unlogger->eidx));
  }
@@ -114,8 +114,8 @@ bool Window::addSegment(int i) {
    QString camera_fn = this->camera_paths.at(i).toString();
    frs.insert(i, new FrameReader(qPrintable(camera_fn)));
  }
  return true;
}
return false;
@@ -193,9 +193,9 @@ void Window::paintEvent(QPaintEvent *event) {
  tt.drawLine(lt, 300-lvv, rt, 300-vv);
  if (enabled) {
    tt.setPen(Qt::green);
  } else {
    tt.setPen(Qt::blue);
  }
  tt.drawLine(rt, 300, rt, 600);
@@ -237,7 +237,7 @@ int main(int argc, char *argv[]) {
  QApplication app(argc, argv);
  QString route(argv[1]);
  int use_api = QString::compare(QString("use_api"), route, Qt::CaseInsensitive) == 0;
  int seek = QString(argv[2]).toInt();
  printf("seek: %d\n", seek);
@@ -251,7 +251,7 @@ int main(int argc, char *argv[]) {
  }
  Window window(route, seek, use_api);
  window.resize(1920, 800);
  window.setWindowTitle("nui unlogger");
  window.show();


@@ -9,4 +9,3 @@ if [ $# -gt 0 ]; then
else
  echo "Please Enter a Route"
fi


@@ -10,24 +10,24 @@ git clone https://github.com/commaai/openpilot.git
# Add export PYTHONPATH=$HOME/openpilot to your bashrc
# Have a working tensorflow+keras in python3.7.3 (with [packages] in openpilot/Pipfile)
```
## Install (in tab 1)
```
cd ~/openpilot/tools/sim
./start_carla.sh  # install CARLA 0.9.7 and start the server
```
## openpilot (in tab 2)
```
cd ~/openpilot/selfdrive/
PASSIVE=0 NOBOARD=1 ./manager.py
```
## bridge (in tab 3)
```
# links carla to openpilot, will "start the car" according to manager
cd ~/openpilot/tools/sim
./bridge.py
```
## Controls
Now you can control the simulator with the keys:
1: Cruise up 5 mph


@@ -15,4 +15,4 @@ if [ ! -d carla ]; then
fi
cd carla
./CarlaUE4.sh


@@ -1,49 +1,49 @@
Run openpilot with webcam on PC/laptop
=====================
What's needed:
- Ubuntu 16.04
- Python 3.7.3
- GPU (recommended)
- Two USB webcams, at least 720p and 78 degrees FOV (e.g. Logitech C920/C615)
- [Car harness](https://comma.ai/shop/products/comma-car-harness) w/ black panda (or the outdated grey panda/giraffe combo) to connect to your car
- [Panda paw](https://comma.ai/shop/products/panda-paw) (or USB-A to USB-A cable) to connect panda to your computer
- Tape, Charger, ...
That's it!
## Clone openpilot and install the requirements
```
cd ~
git clone https://github.com/commaai/openpilot.git
```
- Follow [this readme](https://github.com/commaai/openpilot/tree/master/tools) to install the requirements
- Add line "export PYTHONPATH=$HOME/openpilot" to your ~/.bashrc
- You also need to install tensorflow 2.2 and nvidia drivers: nvidia-xxx/cuda10.0/cudnn7.6.5
- Install [OpenCL Driver](http://registrationcenter-download.intel.com/akdlm/irc_nas/vcp/15532/l_opencl_p_18.1.0.015.tgz)
- (Note: the code assumes the CL platform order to be 0.GPU/1.CPU when running clinfo; if reversed, change the -1 to -2 in selfdrive/modeld/modeld.cc#L130; helping us refactor this mess is encouraged)
- Install [OpenCV4](https://www.pyimagesearch.com/2018/08/15/how-to-install-opencv-4-on-ubuntu/) (ignore the Python part)
## Build openpilot for webcam
```
cd ~/openpilot
```
- Check selfdrive/camerad/cameras/camera_webcam.cc lines 72 & 146 before building if any camera is upside down
```
scons use_webcam=1
touch prebuilt
```
## Connect the hardware
- Connect the road-facing camera first, then the driver-facing camera
- (default indexes are 1 and 2; can be modified in selfdrive/camerad/cameras/camera_webcam.cc)
- Connect your computer to panda
## GO
```
cd ~/openpilot/tools/webcam
./accept_terms.py  # accept the user terms so that thermald can detect the car started
cd ~/openpilot/selfdrive
PASSIVE=0 NOSENSOR=1 WEBCAM=1 ./manager.py
```
- Start the car, then the UI should show the road webcam's view
- Adjust and secure the webcams (you can run tools/webcam/front_mount_helper.py to help mount the driver camera)
- Finish calibration and engage!