poc:2025:ondevice-sleepstaging

  * **Status**: ❌ Abandoned
  * **Duration**: August 19 2025
  * **Type**: Hardware/Possible Product
  * While there is an option to add up to an additional 4MB of external RAM (i.e. 4MB accessed at once), this does not seem to be recommended due to a hard decrease in performance.
  * The main issue with convolutional architectures would be the explosion in intermediate activations if many filters are used. This is difficult to tune and explore manually (see the rough estimate after this list).
  * int8 is apparently the preferred format; it is unclear how well convolutions work at that quantization.
  * It is unclear how the DSP interacts with the NN.
  * Dynamic USleep would probably be out of scope.
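To make the activation concern concrete, the sketch below estimates how much memory the output of a single Conv1D layer occupies for one 30-second epoch. The 100 Hz sampling rate, the filter counts, and the 512 KB internal SRAM figure for the ESP32-S3 are assumptions for illustration, not measurements of an actual model.

<code python>
# Rough estimate of intermediate activation memory for one Conv1D layer
# over a single 30-second EEG epoch, assuming int8 activations.
# All numbers are illustrative assumptions, not a proposed architecture.

SAMPLE_RATE_HZ = 100            # assumed EEG sampling rate
EPOCH_SECONDS = 30              # standard sleep-staging epoch length
BYTES_PER_ACTIVATION = 1        # int8
SRAM_BUDGET_BYTES = 512 * 1024  # internal SRAM of the ESP32-S3, no external PSRAM

def conv1d_activation_bytes(input_len: int, n_filters: int, stride: int = 1) -> int:
    """Bytes needed to hold the output feature map of one 'same'-padded Conv1D layer."""
    return (input_len // stride) * n_filters * BYTES_PER_ACTIVATION

epoch_len = SAMPLE_RATE_HZ * EPOCH_SECONDS  # 3000 samples per epoch
for n_filters in (16, 32, 64, 128):
    act = conv1d_activation_bytes(epoch_len, n_filters)
    print(f"{n_filters:4d} filters -> {act / 1024:6.1f} KB "
          f"({act / SRAM_BUDGET_BYTES:6.1%} of 512 KB SRAM)")
</code>

Under these assumptions, at 128 filters the output of a single layer already occupies roughly three quarters of the internal SRAM, before counting weights, the input buffer, or any later layers.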
  
==== Possible Solution Approaches ====
  * One would probably need an MLOps pipeline like [[https://www.edgeimpulse.com/blog/|Edge Impulse]] to arrive at a solution in any reasonable amount of time.
  * TensorFlow Lite has support for microcontrollers: [[https://github.com/tensorflow/tflite-micro|TFLite Micro]]. A minimal conversion sketch follows this list.
  * [[https://github.com/espressif/esp-nn|Espressif ESP-NN]] provides optimized NN kernels for their ESP32 devices.
  
==== Surprises & Insights ====
  * It's barely deep learning anymore; the focus shifts much more toward DSP.
  * I have absolutely no idea how such small models would perform. One would need a full development cycle for a new sleep staging model, completely decoupled from recent advances in sleep staging.
  
===== Recommendations =====
**Should we proceed?**: No, currently out of scope.
  
===== External Resources =====
  * [[https://www.espressif.com/sites/default/files/documentation/esp32-s3_datasheet_en.pdf|ESP32-S3 Datasheet]]
  * [[https://arxiv.org/abs/2212.03332|Edge Impulse MLOps Arxiv Article]]