














Convert AI models to be compatible with different AI hardware (Intel, NVIDIA, MediaTek, Qualcomm, …) (see the ONNX export sketch after this list)
Migrate video analytics pipelines across hardware vendors
E.g., converted an NVIDIA DeepStream pipeline ➝ Intel DL Streamer
Keep software library versions compatible with different GPU cards (e.g., updated the PyTorch and CUDA libraries for object detection on an RTX A4000, raising inference speed from 90 to 120 fps)
Reduce training time with parallel training on multiple GPUs (see the multi-GPU sketch after this list)
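As a minimal sketch of the model-conversion step mentioned above, the snippet below exports a PyTorch model to ONNX, a portable format that vendor toolchains such as Intel OpenVINO and NVIDIA TensorRT can consume. The ResNet-50 backbone, input shape, and file name are illustrative placeholders, not the project's actual detection model.

import torch
import torchvision

# A torchvision backbone stands in for the project's own model;
# weights=None keeps the sketch runnable offline.
model = torchvision.models.resnet50(weights=None)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # one 224x224 RGB frame

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",                            # portable intermediate format
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},    # allow variable batch size
    opset_version=17,
)

# The resulting model.onnx can then be compiled for the target hardware,
# e.g. with OpenVINO's model optimizer on Intel or trtexec on NVIDIA.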
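The multi-GPU training item could look roughly like the following PyTorch DistributedDataParallel sketch: one process per GPU, an NCCL process group, and a DistributedSampler so each GPU trains on a different shard of the data. The toy linear model, dataset, and hyperparameters are assumptions for illustration only.

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler


def train(rank, world_size):
    # One worker process per GPU; NCCL handles GPU-to-GPU communication.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # Toy dataset and model as stand-ins for the real training job.
    dataset = TensorDataset(torch.randn(1024, 10), torch.randn(1024, 1))
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    model = DDP(torch.nn.Linear(10, 1).to(rank), device_ids=[rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(5):
        sampler.set_epoch(epoch)             # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.to(rank), y.to(rank)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()  # gradients are all-reduced across GPUs
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()   # one worker per available GPU
    mp.spawn(train, args=(world_size,), nprocs=world_size)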
















FIFO, PISO, MUX, Hash, DDR RAM Handling, VHDL & Verilog, Xilinx ISE
GSM/GPRS, Zigbee, BLE, Beacon, RFID, Wi-Fi, 3G, MQTT
Keil C, IAR, Altium, Buildroot, OpenEmbedded, OpenWrt

VxWorks, Linux/real-time Linux, Android, uClinux, ttylinux, Gentoo
ARM, x86, MIPS, PIC, AVR32
USB, Ethernet, LCD, CF/MMC/SD













