====== GSoC 2025: Zephyr project ======

[[:gsoc:google-summer-code-2025|Main GSoC Linux Foundation page: How to apply, deadlines, other workgroups, ...]]

====== Zephyr ======

Zephyr RTOS is a lightweight and flexible real-time operating system tailored for embedded systems and devices with limited resources, such as microcontrollers. Developed as a collaborative project hosted by the Linux Foundation, Zephyr supports multiple architectures and is released under the Apache License 2.0. Zephyr is commonly used in IoT applications and other embedded systems where efficiency and reliability are essential.

==== Zephyr Community ====

  * Website - https://www.zephyrproject.org/
  * Git - https://github.com/zephyrproject-rtos/zephyr/
  * Documentation - https://docs.zephyrproject.org/latest/index.html
  * Discord - https://discord.com/invite/Ck7jw53nU2
  * Getting Started Guide - https://docs.zephyrproject.org/latest/develop/getting_started/index.html
  * Code Licenses - Apache 2.0

===== Project Proposals =====

===== Project 1: Running Open-Source ML Models on HiFi4 DSP with Zephyr RTOS =====

**Machine-Learning-related project**

//1 contributor, medium-size (175 hours)//

//Level of difficulty//: Intermediate

[[https://docs.zephyrproject.org/latest/index.html|Zephyr]] is an open-source, real-time operating system (RTOS) optimized for resource-constrained devices, making it ideal for IoT and embedded systems. It supports multiple architectures and has a modular design. For machine learning (ML) with Zephyr, developers can integrate frameworks such as [[https://github.com/tensorflow/tflite-micro|TensorFlow Lite for Microcontrollers (TFLM)]] or [[https://github.com/edgeimpulse|Edge Impulse]], which allow small, efficient ML models to run on devices with limited CPU and memory resources.

The i.MX series from NXP features powerful DSP cores that can offload computational workloads from the main CPU. This project focuses on using Zephyr RTOS to manage ML workloads efficiently on these DSPs. It will require porting or optimizing existing ML frameworks for the DSP, designing APIs for seamless integration, and demonstrating an end-to-end ML pipeline running on Zephyr (a minimal inference sketch is included at the end of this proposal). Potential deliverables include support for TFLM on the DSP and a sample application showcasing the implementation.

**Expected Outcomes:**

  * Integration of ML inference frameworks (such as TFLM) on NXP DSPs running Zephyr
  * Sample applications demonstrating ML inference (e.g., speech recognition, anomaly detection)
  * Documentation and tutorials for deploying ML workloads on NXP DSPs
  * Pull requests submitted to Zephyr's upstream repository

**Skills Required:**

  * C/C++ programming
  * Embedded systems and real-time operating systems (Zephyr)
  * Familiarity with TensorFlow Lite Micro or similar lightweight ML frameworks
  * Familiarity with version control systems (e.g., Git)

**Mentors:**

  * Iuliana Prodan - iuliana.prodan@nxp.com
  * George Stefan - george.stefan@nxp.com
  * Daniel Baluta - daniel.baluta@gmail.com
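For orientation, the sketch below shows the kind of end-to-end TFLM inference loop a sample application would run on the DSP once the framework is ported. It uses the standard TFLM C++ API; the model array ''g_model_data'', the tensor arena size, and the operator list are placeholders that depend on the actual model, and the Zephyr/DSP-specific glue (build integration, IPC with the main cores) is intentionally not shown.

<code cpp>
/*
 * Minimal TFLM inference sketch (illustrative only).
 * g_model_data is assumed to be a .tflite flatbuffer embedded as a C array;
 * the op resolver entries and arena size depend on the actual model.
 */
#include <stdint.h>
#include <string.h>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];   /* placeholder model flatbuffer */

namespace {
constexpr int kTensorArenaSize = 16 * 1024;  /* sized for the example model */
uint8_t tensor_arena[kTensorArenaSize];
}

int run_inference(const float *features, size_t n_features, float *result)
{
	const tflite::Model *model = tflite::GetModel(g_model_data);
	if (model->version() != TFLITE_SCHEMA_VERSION) {
		return -1;  /* model built against an incompatible schema */
	}

	/* Register only the kernels the model actually uses. */
	static tflite::MicroMutableOpResolver<2> resolver;
	resolver.AddFullyConnected();
	resolver.AddSoftmax();

	static tflite::MicroInterpreter interpreter(model, resolver,
						    tensor_arena, kTensorArenaSize);
	if (interpreter.AllocateTensors() != kTfLiteOk) {
		return -1;  /* arena too small or unsupported operator */
	}

	/* Copy input features into the model's input tensor and run inference. */
	TfLiteTensor *input = interpreter.input(0);
	memcpy(input->data.f, features, n_features * sizeof(float));
	if (interpreter.Invoke() != kTfLiteOk) {
		return -1;
	}

	*result = interpreter.output(0)->data.f[0];
	return 0;
}
</code>

On the HiFi4, the same application code would typically be built against TFLM's Xtensa-optimized kernels rather than the portable reference kernels; wiring those kernels into Zephyr's build for the DSP is part of the porting work described above.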