SimulTrain
\[
T_{\text{seq}} = T_{\text{send}} + T_{\text{forward}} + T_{\text{backward}} + T_{\text{recv}}
\]
where \( T_{\text{send}} \) and \( T_{\text{recv}} \) depend on network bandwidth, and \( T_{\text{forward}} \), \( T_{\text{backward}} \) depend on model size. For large models (e.g., ResNet-50), \( T_{\text{send}} \gg T_{\text{forward}} \) on typical 4G/5G networks.
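To see when a round becomes network-bound, the decomposition can be plugged with back-of-the-envelope numbers; every value below is an illustrative assumption, not a measurement from the paper:

```python
# Hedged sketch: illustrative numbers for the latency decomposition T_seq.
# All values are hypothetical, not measurements from the paper.

def t_seq(t_send, t_forward, t_backward, t_recv):
    """Total latency of one sequential edge-to-cloud training round."""
    return t_send + t_forward + t_backward + t_recv

# Example: a ResNet-50-sized payload on a slow uplink (assumed numbers).
payload_bytes = 100e6                      # ~100 MB shipped per round
uplink_bps = 20e6                          # 20 Mbit/s 4G uplink
t_send = payload_bytes * 8 / uplink_bps    # seconds spent uploading
t_fwd, t_bwd, t_recv = 0.05, 0.10, 0.5     # assumed compute/download times

total = t_seq(t_send, t_fwd, t_bwd, t_recv)
print(f"T_send = {t_send:.1f}s of T_seq = {total:.1f}s")
# With these numbers T_send dominates T_forward by ~3 orders of magnitude,
# i.e. the round is network-bound, as the text notes.
```

With a 20 Mbit/s uplink the upload alone takes 40 s, dwarfing the compute terms, which is exactly the regime \( T_{\text{send}} \gg T_{\text{forward}} \) described above.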
\[
\tilde{\nabla}_k = \nabla \ell(w^{(e)}_k; x_k) + \alpha \cdot (w^{(c)}_k - w^{(e)}_k)
\]
where \( \alpha \) is a learned or fixed extrapolation coefficient (set to 0.5 in our experiments). This linear correction term approximates the gradient at the cloud's parameter version without recomputing the forward pass. Edge and cloud maintain version counters \( v_e, v_c \). The cloud applies updates immediately; the edge applies received deltas in order but without locking. To prevent divergence, we apply a soft reconciliation step every \( R \) iterations.
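The gradient forecast and the reconciliation step can be sketched as follows. The correction term matches the equation above; `soft_reconcile` and its mixing weight `beta` are assumptions, since the excerpt does not give the reconciliation formula:

```python
# Hedged sketch of the gradient forecast: the edge corrects its local
# gradient toward the cloud's parameter version without an extra forward
# pass. soft_reconcile and beta are illustrative assumptions.

def forecast_gradient(grad_edge, w_edge, w_cloud, alpha=0.5):
    """tilde_grad = grad + alpha * (w_cloud - w_edge), elementwise."""
    return [g + alpha * (wc - we)
            for g, we, wc in zip(grad_edge, w_edge, w_cloud)]

def soft_reconcile(w_edge, w_cloud, beta=0.5):
    """Every R iterations, pull the edge copy toward the cloud copy.
    beta is an assumed mixing weight, not specified in the excerpt."""
    return [(1 - beta) * we + beta * wc for we, wc in zip(w_edge, w_cloud)]

grad = [0.2, -0.1]
w_e, w_c = [1.0, 2.0], [0.9, 2.2]
corrected = forecast_gradient(grad, w_e, w_c)   # alpha = 0.5 as in the text
```

Note that the correction costs only one vector subtraction and scale per step, which is why no second forward pass is needed.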
The key insight of SimulTrain is that the forward pass of one batch and the backward pass of a previous batch can overlap in time, provided parameter versions and gradients are carefully managed. This is analogous to CPU pipelining, but applied to distributed training across heterogeneous compute nodes.
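A minimal sketch of this overlap, with Python threads standing in for the edge and cloud workers and `time.sleep` standing in for real compute (all timings and names are illustrative assumptions):

```python
# Hedged sketch: overlap the "forward" of batch k+1 with the "backward"
# of batch k via two worker threads and a queue, analogous to pipelining.
import queue
import threading
import time

acts = queue.Queue(maxsize=2)   # activations handed from forward to backward
done = []                       # batches whose backward pass has finished

def forward_worker(batches):
    for b in batches:
        time.sleep(0.01)        # simulate forward compute
        acts.put(b)             # hand off; do NOT wait for backward
    acts.put(None)              # sentinel: no more batches

def backward_worker():
    while (b := acts.get()) is not None:
        time.sleep(0.01)        # simulate backward compute
        done.append(b)

t1 = threading.Thread(target=forward_worker, args=(range(4),))
t2 = threading.Thread(target=backward_worker)
start = time.perf_counter()
t1.start(); t2.start(); t1.join(); t2.join()
elapsed = time.perf_counter() - start
# Pipelined: ~5 x 0.01s wall-clock instead of 8 x 0.01s fully sequential.
```

The bounded queue plays the role of the version management the text describes: the forward worker can run at most two batches ahead of the backward worker.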
Authors: A. Chen, M. Watanabe, L. K. Singh
Affiliation: Institute for Distributed Intelligence, Stanford University & RIKEN Center for Advanced Intelligence Project

Abstract

The proliferation of edge devices and cloud computing has given rise to hybrid machine learning pipelines. However, traditional training methods suffer from a sequential dependency: the edge device collects data, transmits it to the cloud, and only then updates the model. This introduces latency, bandwidth inefficiency, and poor adaptation to non-stationary data streams. We propose SimulTrain, a simultaneous training solution that decouples forward and backward passes across edge and cloud nodes, enabling real-time collaborative learning. SimulTrain uses a novel gradient forecast mechanism and asynchronous weight reconciliation to ensure convergence without waiting for full round-trip communication. Theoretical analysis proves that SimulTrain achieves the same convergence rate as synchronous SGD under bounded-delay assumptions. Empirically, on video analytics and IoT sensor fusion tasks, SimulTrain reduces training latency by 78%, cuts bandwidth usage by 65%, and keeps model accuracy within 0.5% of the centralized baseline. Our solution is open-sourced at github.com/simultrain.

1. Introduction

Edge-cloud collaboration is the backbone of modern AI systems: autonomous vehicles, smart factories, and wearable health monitors. A typical workflow involves four steps: (i) edge devices collect data, (ii) the edge sends mini-batches to the cloud, (iii) the cloud updates the model, and (iv) the cloud sends back new weights. This sequential pipeline leaves edge compute idle and underutilizes cloud accelerators. Worse, when network latency exceeds compute time, the system becomes I/O-bound.
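The four-step sequential workflow above can be sketched as a single blocking round trip; every function name here is an illustrative placeholder, not an API from the paper:

```python
# Hedged sketch of the sequential baseline the paper critiques: each round
# blocks on the full edge -> cloud -> edge round trip. The transport and
# "update" functions below are toy stand-ins.

def sequential_round(batch, weights, send, cloud_update, recv):
    payload = send(batch)                         # (ii) edge uploads batch
    new_weights = cloud_update(payload, weights)  # (iii) cloud trains
    return recv(new_weights)                      # (iv) edge blocks on reply

# Toy plumbing: identity transport and a fake "update" for illustration.
send = recv = lambda x: x
cloud_update = lambda batch, w: [wi - 0.1 * bi for wi, bi in zip(w, batch)]

w = [1.0, 1.0]
for batch in ([0.5, 0.2], [0.1, 0.4]):            # (i) edge collects data
    w = sequential_round(batch, w, send, cloud_update, recv)
```

Because `sequential_round` returns only after `recv`, the edge does no useful work while the network round trip is in flight, which is the idle time SimulTrain targets.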
\[
w_{t+1} = w_t - \eta \, \nabla \ell(w_t; x_t, y_t)
\]
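The plain SGD update above, as a minimal sketch (the step size and gradient values are illustrative):

```python
# Hedged sketch of the vanilla SGD step: w_{t+1} = w_t - eta * grad.
def sgd_step(w, grad, eta=0.1):
    """One elementwise SGD update; eta is an assumed step size."""
    return [wi - eta * gi for wi, gi in zip(w, grad)]

w_next = sgd_step([1.0, -2.0], [0.5, -0.5])   # approximately [0.95, -1.95]
```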
where \( \sigma^2 \) is the gradient noise variance. This matches the rate of synchronous SGD when the delay \( \tau \) is bounded.
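For reference, bounded-delay analyses of asynchronous SGD typically yield a bound of the following general shape; this is an assumed standard form, not the paper's exact theorem, whose statement is missing from this excerpt:

```latex
% Assumed standard form for bounded-delay (tau) asynchronous SGD on a
% smooth nonconvex objective; constants are illustrative, not the paper's.
\[
  \min_{t \le T} \, \mathbb{E}\,\|\nabla f(w_t)\|^2
  \;\le\; O\!\left( \frac{f(w_0) - f^{*}}{\eta T}
          + \eta L \sigma^2 (1 + \tau) \right)
\]
```

When \( \tau \) is a constant, the delay term is absorbed into the variance term, recovering the synchronous-SGD rate as the text claims.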