TL;DR: We propose SpikeMatch, a novel semi-supervised learning (SSL) method for Spiking Neural Networks (SNNs) that combines consistency regularization with agreement-based pseudo-labeling in a FixMatch-style framework, leveraging the temporal dynamics of SNNs without any extra parameters.

Overview

We introduce SpikeMatch, an agreement-based pseudo-labeling method that treats the temporal outputs of an SNN model as multiple predictions within a co-training framework. By simply increasing the leakage factor, SpikeMatch addresses common challenges in semi-supervised learning, such as confirmation bias, by effectively leveraging distinct semantic features.

Diverse predictions of SNNs

Co-training diagram

Unlike conventional ANN-based models that require extra parameters to generate multiple views, SNNs naturally produce diverse predictions across time steps through binary spike signals. SpikeMatch leverages this temporal dynamic to enable agreement-based pseudo-labeling, offering a parameter-free approach to diversity and robustness.

Motivation

We conduct an empirical analysis of the leakage factor in Leaky Integrate-and-Fire (LIF) neurons. Our results show that higher leakage from the previous time step leads to more diverse predictions, which are crucial for learning distinctive semantic features in semi-supervised settings with limited labeled data.
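To make this concrete, below is a minimal PyTorch sketch of a LIF update. It assumes one particular convention, namely that the leakage factor τ scales how much of the previous membrane potential is discarded; the paper's exact parameterization, surrogate gradient, and reset scheme may differ.

```python
import torch

def lif_step(v_prev, x_t, tau=0.5, v_th=1.0):
    """One Leaky Integrate-and-Fire (LIF) update step (illustrative sketch).

    Assumed convention: the leakage factor `tau` is the fraction of the
    previous membrane potential that leaks away, so (1 - tau) is carried
    over to the current time step.
    """
    v = (1.0 - tau) * v_prev + x_t   # leaky integration of the input current
    spike = (v >= v_th).float()      # emit a binary spike where the threshold is reached
    v = v * (1.0 - spike)            # hard reset of neurons that fired
    return spike, v

# Example: with a high tau, each time step depends mostly on its own input,
# which is where the more diverse temporal predictions come from.
spikes, v = lif_step(torch.zeros(8), torch.rand(8), tau=0.8)
```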

Discriminative features

We visualize the temporal feature responses using the Spiking Activation Map (SAM). As the leakage factor increases, the SAM outputs become more diverse across time steps, providing the model with richer feature variations that facilitate learning of class-discriminative representations.

Comparisons diagram

Spiking Activation Map (SAM) outputs under varying leakage factors and time steps.

Cosine similarity

Without explicit constraints, a higher leakage factor consistently lowers cosine similarity, indicating more diverse temporal features.

Comparisons diagram

For each τ, each column shows the average of 6 cosine similarity values over seeds 0, 1, and 2. The 6 values are the pairwise cosine similarities among the last-layer features from the 4 time steps.
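As a rough illustration of how such a value can be computed, the sketch below averages the 6 pairwise cosine similarities among 4 time-step feature vectors; the [T, D] tensor layout is an assumption for illustration, not the paper's exact implementation.

```python
from itertools import combinations
import torch
import torch.nn.functional as F

def mean_temporal_cosine_similarity(features):
    """Average pairwise cosine similarity between time-step features.

    `features` is assumed to have shape [T, D]: the last-layer feature
    vector at each of T time steps (T = 4 gives C(4, 2) = 6 pairs).
    """
    pairs = combinations(range(features.shape[0]), 2)
    sims = [F.cosine_similarity(features[i], features[j], dim=0) for i, j in pairs]
    return torch.stack(sims).mean()
```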

Other diversity metrics

Comparisons diagram

To validate that a higher leakage factor (τ) improves temporal diversity, we analyze CIFAR-10 using KL divergence, temporal variance, and effective rank. All three metrics consistently increase as τ grows, confirming more diverse and distinct temporal features.
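For reference, the sketch below shows one common way to compute these diversity metrics from a [T, D] stack of temporal features and [T, C] per-time-step logits; the exact definitions used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def effective_rank(features):
    """Effective rank of [T, D] temporal features: exp of the entropy of
    the normalized singular-value spectrum (higher = more diverse)."""
    s = torch.linalg.svdvals(features)
    p = s / s.sum()
    return torch.exp(-(p * torch.log(p + 1e-12)).sum())

def temporal_variance(features):
    """Mean per-dimension variance across the T time steps."""
    return features.var(dim=0).mean()

def mean_pairwise_kl(logits):
    """Average pairwise KL divergence between the per-time-step class
    distributions (`logits` has shape [T, num_classes])."""
    probs = logits.softmax(dim=-1)
    T = probs.shape[0]
    kls = [F.kl_div(probs[j].log(), probs[i], reduction="sum")
           for i in range(T) for j in range(T) if i != j]
    return torch.stack(kls).mean()
```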

Co-training framework

Overall framework

(left): A labeled image (red) and unlabeled images (blue, green) pass through the SNN, producing M averaged predictions. Predictions from weakly augmented images guide those from strongly augmented ones via agreement-based pseudo-labeling.

(right): A pseudo-label (green) is adopted only when the remaining M−1 predictions (blue) agree on the same class, with the highest-confidence output selected.

SpikeMatch treats temporal outputs as multiple views within one SNN. Agreement among weakly augmented predictions generates reliable pseudo-labels for strong augmentations, enabling co-training without extra parameters.
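The sketch below illustrates this idea in PyTorch under simplifying assumptions: every temporal view must agree, and a hypothetical FixMatch-style confidence threshold is applied before a pseudo-label is kept. It is a sketch of the mechanism, not the paper's exact implementation.

```python
import torch

def agreement_pseudo_label(weak_logits, threshold=0.95):
    """Agreement-based pseudo-labeling over temporal views (sketch).

    `weak_logits`: [M, B, C] predictions for B weakly augmented unlabeled
    images from M temporal views. Here we keep a pseudo-label only when
    all M views agree on the class and the most confident view exceeds a
    FixMatch-style `threshold`; the paper's exact criterion (agreement
    among the remaining M-1 views) may differ slightly.
    """
    probs = weak_logits.softmax(dim=-1)       # [M, B, C]
    conf, pred = probs.max(dim=-1)            # per-view confidence and class, [M, B]
    agree = (pred == pred[0:1]).all(dim=0)    # [B] True where every view agrees
    best_conf, best_view = conf.max(dim=0)    # most confident view per sample
    pseudo = pred.gather(0, best_view.unsqueeze(0)).squeeze(0)  # its predicted class
    mask = agree & (best_conf >= threshold)   # samples that receive a pseudo-label
    return pseudo, mask

# (pseudo, mask) would then supervise the predictions on strongly augmented
# versions of the same images, e.g. through a masked cross-entropy loss.
```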

Experimental results

We evaluate SpikeMatch on CIFAR-10, CIFAR-100, STL-10, ImageNet, and neuromorphic datasets, demonstrating its effectiveness across diverse benchmarks for semi-supervised image classification.

Results on CIFAR-10 & CIFAR-100 under various label conditions

SpikeMatch consistently outperforms existing semi-supervised learning methods across different label regimes on CIFAR-10 and CIFAR-100. Even with extremely few labeled samples per class, our approach achieves higher accuracy and stability by leveraging temporal agreement for reliable pseudo-labeling.

| Method | CIFAR-10 (4 labels) | CIFAR-10 (25 labels) | CIFAR-10 (400 labels) | CIFAR-100 (4 labels) | CIFAR-100 (25 labels) | CIFAR-100 (100 labels) |
|---|---|---|---|---|---|---|
| UDA | 69.72±6.65 | 87.76±0.21 | 90.06±0.06 | 44.66±1.63 | 60.35±0.67 | 66.51±0.15 |
| FixMatch | 67.97±4.42 | 86.84±0.91 | 89.40±0.23 | 38.39±0.91 | 58.40±0.48 | 64.75±0.17 |
| AdaMatch | 82.13±6.98 | 87.15±0.62 | 89.44±0.11 | 44.54±2.52 | 59.52±0.24 | 65.86±0.44 |
| FreeMatch | 85.65±4.16 | 87.86±0.34 | 89.64±0.27 | 46.49±2.75 | 62.01±0.27 | 67.10±0.35 |
| SoftMatch | 84.27±7.69 | 87.97±0.18 | 89.91±0.28 | 45.99±3.06 | 61.97±0.53 | 67.35±0.08 |
| RegMixMatch | 82.59±4.02 | 87.36±0.55 | 90.08±0.65 | 38.49±3.09 | 60.27±1.96 | 66.69±0.24 |
| SpikeMatch w/o TET | 87.98±0.51 | 88.13±0.37 | **91.00±0.002** | 47.45±1.79 | 63.18±0.09 | 69.37±0.003 |
| SpikeMatch | **88.13±0.79** | **88.36±0.06** | 90.93±0.04 | **47.52±2.18** | **63.34±0.04** | **69.50±0.17** |

Accuracy (%) on CIFAR-10 and CIFAR-100 with 4/25/400 and 4/25/100 labels per class, respectively, averaged over 3 random seeds. The best results are in bold.

ImageNet with 100 labels per class

On ImageNet with SEW-ResNet-50, SpikeMatch outperforms SoftMatch, showing strong effectiveness even with large-scale data and deep backbones.

| Method | Top-1 | Top-5 |
|---|---|---|
| SoftMatch | 45.66 | 71.44 |
| SpikeMatch | 49.68 | 75.07 |
Accuracy (%) on ImageNet using 100 labels per class.

DVS-CIFAR-10 with 1% labels

We evaluate SpikeMatch on the event-based CIFAR10-DVS dataset, preserving the temporal structure of events and capturing their intrinsically diverse patterns.

| Method | DVS-CIFAR-10 |
|---|---|
| FreeMatch | 36.40 |
| SoftMatch | 37.40 |
| SpikeMatch | 49.80 |
Accuracy (%) on DVS-CIFAR10 with 1% labels.

STL-10 with 4 labels per class

| Method | Seed 0 | Seed 1 | Seed 2 | Mean |
|---|---|---|---|---|
| FreeMatch | 56.73 | 49.94 | 57.00 | 54.55±4.00 |
| SoftMatch | 58.40 | 50.58 | 58.14 | 55.70±4.44 |
| SpikeMatch | 62.36 | 64.69 | 61.36 | 62.80±1.70 |
Accuracy (%) on STL-10 with 40 labeled samples.

On the more challenging STL-10 dataset with high-resolution images, SpikeMatch outperforms SoftMatch while maintaining stable performance.

Analysis

Energy and label efficiency

On CIFAR-10 with only one label per class, SpikeMatch surpasses state-of-the-art ANN methods while achieving over 20× higher energy efficiency, making it practical for low-label, low-power settings.

| Method | Accuracy | Energy Efficiency |
|---|---|---|
| FixMatch (ANN) | 78.69±9.13 | 1× |
| SoftMatch (ANN) | 79.04±12.54 | 1× |
| SpikeMatch | 81.92±3.06 | 20.8× |

Accuracy (%) and relative energy efficiency (including MAC operations) on CIFAR-10 with only one label per class.
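For context, the sketch below shows one common way ANN-vs-SNN energy ratios are estimated from operation counts, using the widely cited 45 nm per-operation costs (≈4.6 pJ per MAC, ≈0.9 pJ per accumulate); the function names, firing rate, and MAC fraction are illustrative assumptions, not the paper's exact protocol.

```python
# Hypothetical energy-efficiency estimate with commonly used 45 nm costs;
# the paper's exact counting protocol may differ.
E_MAC, E_AC = 4.6e-12, 0.9e-12  # joules per 32-bit operation

def ann_energy(mac_ops: float) -> float:
    """Every ANN multiply-accumulate costs a full MAC."""
    return mac_ops * E_MAC

def snn_energy(mac_ops: float, firing_rate: float, timesteps: int,
               mac_fraction: float = 0.05) -> float:
    """Binary spikes turn most MACs into cheaper accumulates; a small
    fraction of layers (e.g. the encoding layer) still uses real MACs.
    `firing_rate` and `mac_fraction` are illustrative placeholders."""
    ac_ops = mac_ops * (1.0 - mac_fraction) * firing_rate * timesteps
    real_macs = mac_ops * mac_fraction * timesteps
    return ac_ops * E_AC + real_macs * E_MAC

# Relative efficiency ~= ann_energy(N) / snn_energy(N, firing_rate, T)
```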

Leakage configuration

| Leakage factor (τ) | 0 | 0.1 | 0.3 | 0.5 | 0.8 |
|---|---|---|---|---|---|
| SoftMatch | 79.71 | 84.27 | 84.41 | 84.27 | 85.52 |
| SpikeMatch | 80.16 (+0.45) | 85.43 (+1.16) | 86.33 (+1.92) | 88.13 (+3.86) | 88.17 (+2.65) |
Averaged accuracy (%) over three seeds on CIFAR-10 with 4 labels per class under different leakage factors.

Both SoftMatch and SpikeMatch benefit from a higher leakage factor, but the performance gain is more pronounced in SpikeMatch. Our agreement-based approach more effectively exploits subtle features from temporal diversity, resulting in greater improvements for semi-supervised learning.

Visualization of the effectiveness of SpikeMatch

Attention analysis
Unlike FreeMatch (first row) and SoftMatch (second row), our method (last row) more effectively captures multiple discriminative features through the agreement-based approach.
FreeMatch is confused by the bird class, while SoftMatch struggles with the deer class. In contrast, SpikeMatch identifies key features such as the beak of the bird and the antlers of the deer, producing a more diagonal confusion matrix and reducing class confusion.
FreeMatch and SoftMatch fail to clearly distinguish cats and dogs, whereas SpikeMatch captures distinctive features of both classes, leading to a sharper confusion matrix with fewer misclassifications.

Conclusion

In this work, we presented SpikeMatch, a semi-supervised learning (SSL) framework for SNNs that utilizes their temporal dynamics via the leakage factor to achieve diverse pseudo-labeling within a co-training framework. By leveraging agreement among multiple predictions from a single SNN, SpikeMatch produces reliable pseudo-labels from weakly augmented unlabeled samples to train on strongly augmented ones, effectively mitigating confirmation bias and enhancing feature learning with limited labeled data. Experiments show that SpikeMatch outperforms existing SSL methods adapted to SNNs across benchmarks.

Citation

If you use this work or find it helpful, please consider citing:

    @misc{yang2025spikematchsemisupervisedlearningtemporal,
      title={SpikeMatch: Semi-Supervised Learning with Temporal Dynamics of Spiking Neural Networks}, 
      author={Jini Yang and Beomseok Oh and Seungryong Kim and Sunok Kim},
      year={2025},
      eprint={2509.22581},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.22581}, 
    }