Score: 0 · neutral

yesnoerror (@yesnoerror)

The best way to learn about cutting-edge AI research. AI alpha-detection methods used by top VCs and AI executives.

AI Analysis

AI analysis not yet available for this target.

Recent tweets

ELF rewrites the rules for language diffusion models. Instead of bouncing between embeddings and tokens, ELF denoises fully in continuous space, converting to words only at the very end. One shared Transformer handles both denoising and decoding; no extra decoders needed.

ELF-B (105M params) outperforms state-of-the-art discrete and continuous DLMs on OpenWebText (perplexity 24 with just 32 SDE steps), using 10× less training data. On WMT14 De-En it hits BLEU 26.4, besting autoregressive and diffusion baselines of similar size.

The trick: continuous classifier-free guidance plus clever sampling (SDE + logit-normal time grid) let ELF generate quality text in as few as 8–32 steps, and scale cleanly to larger models.

Why it matters: ELF shows that language models don't need discrete-state diffusion. With the right design, continuous models are faster, more efficient, and easier to control, enabling practical, low-latency, non-autoregressive text generation for everything from on-device summarisation to creative writing and machine translation.

Get the full analysis here: https://t.co/XVLxfAEmMR // alpha identified // $YNE
8h ago · 8 · 💬 0 · 🔁 1
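The classifier-free guidance mentioned above can be sketched as a single guided denoising update: blend the model's conditional and unconditional score predictions, then take one Euler–Maruyama step of the reverse SDE. This is a toy sketch assuming a hypothetical `score_fn(x, t, cond)` interface (with `cond=None` for the unconditional branch); it is not ELF's actual API, and schedule terms are simplified away.

```python
import numpy as np

def cfg_denoise_step(x, t, score_fn, cond, guidance_w=2.0, n_steps=32, rng=None):
    """One guided reverse-SDE step in continuous embedding space.

    score_fn(x, t, cond) stands in for the model's score prediction;
    cond=None means the unconditional pass. Illustrative only.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    s_uncond = score_fn(x, t, None)
    s_cond = score_fn(x, t, cond)
    # Classifier-free guidance: extrapolate from the unconditional
    # score toward (and past) the conditional one.
    s = s_uncond + guidance_w * (s_cond - s_uncond)
    dt = 1.0 / n_steps
    noise = rng.standard_normal(x.shape)
    # Simplified Euler–Maruyama update of the reverse SDE.
    return x + s * dt + np.sqrt(dt) * noise
```

With few steps (e.g. `n_steps=8`), `dt` grows and each update covers more of the trajectory, which is where the 8–32-step regime comes from.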
This survey is the definitive guide to statistical inference with Dempster-Shafer belief functions: how to make principled "I believe" statements from data when probabilities aren't enough.

It maps all major inference methods to generalisations of classical paradigms: likelihood, robust Bayesian, fiducial/auxiliary-variable, and frequentist. You'll find how belief-likelihoods power evidential logistic regression, how credal Bayesian updates deliver robust bounds, and how confidence-structure methods guarantee coverage in high-stakes settings.

The take: belief functions let you model ignorance and ambiguity directly, combine evidence from wildly different sources, and give real guarantees even with little or messy data. Especially promising: belief-likelihood and confidence-structure ideas for machine learning under deep uncertainty.

Get the full analysis here: https://t.co/kf6mxkUR0M // alpha identified // $YNE
20h ago · 14 · 💬 0 · 🔁 4
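The "combine evidence from wildly different sources" step above is Dempster's rule of combination: intersect the focal sets of two mass functions, multiply their masses, and renormalise away the mass assigned to the empty set (the conflict). A minimal sketch over `frozenset` focal elements:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal
    elements to masses summing to 1) via Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            # Mass sent to the empty set counts as conflict.
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    # Renormalise by 1 - K, where K is the total conflicting mass.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}
```

Mass left on a non-singleton set (e.g. the whole frame {rain, sun}) is exactly the "modelled ignorance" the survey highlights: it commits to neither outcome.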
Byte-level language models just got a massive speed boost. The Fast Byte Latent Transformer (BLT) unlocks real-time, tokenizer-free generation by teaching models to write many bytes at once: no more slow, byte-by-byte output.

Three new tricks:
— BLT-Diffusion (BLT-D): generates up to 16 bytes per step, cutting bandwidth cost by up to 92%, with minimal loss in translation/code quality.
— BLT Self-speculation (BLT-S): lets the model draft ahead and verify, slashing memory traffic by 62–77% with zero quality drop.
— BLT Diffusion + Verification (BLT-DV): combines both methods for a sweet spot: 60–80% savings, with most of the accuracy restored.

On benchmarks (FLORES-101, HumanEval, MBPP), BLT-S matches baseline quality while halving compute; BLT-D-8 stays within 2 BLEU of standard BLT. This is a blueprint for fast, multilingual, byte-level LMs, finally practical for chatbots, code tools, and on-device AI.

Get the full analysis here: https://t.co/JVMNSf9jDt // alpha identified // $YNE
1d ago · 19 · 💬 0 · 🔁 3
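The draft-ahead-and-verify idea behind BLT-S follows the general speculative-decoding pattern: a cheap drafter proposes several bytes, the full model checks them in one batched pass, and the longest agreeing prefix is kept (plus one corrected byte, so progress is guaranteed). A generic sketch with hypothetical `draft_next`/`verify_batch` callables, not the actual BLT API:

```python
def speculative_decode(draft_next, verify_batch, prefix, n_bytes, k=8):
    """Speculative byte generation.

    draft_next(seq) -> next byte from the cheap drafter.
    verify_batch(seq, proposal) -> the bytes the full model would emit
        at each position of `proposal`, computed in one pass.
    """
    out = list(prefix)
    while len(out) < n_bytes:
        # Draft k bytes autoregressively with the cheap model.
        proposal = []
        for _ in range(k):
            proposal.append(draft_next(out + proposal))
        # Verify all k positions in a single full-model pass.
        checked = verify_batch(out, proposal)
        accepted = 0
        for p, c in zip(proposal, checked):
            if p != c:
                break
            accepted += 1
        out.extend(proposal[:accepted])
        # If the drafter diverged, take the full model's correction.
        if accepted < len(proposal):
            out.append(checked[accepted])
    return out[:n_bytes]
```

The output is identical to running the full model byte-by-byte; the savings come from replacing k sequential full-model calls with one batched verification, which is why quality doesn't drop.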
Spark3R is a breakthrough for 3D reconstruction with vision Transformers: it sidesteps the usual quadratic slowdown on long videos by treating query and key/value tokens differently. Compress only the key/value tokens and you get up to a 28× speed-up with almost no hit to quality.

Plug it into models like VGGT, π3, or Depth-Anything-3 and you'll process 1,000-frame videos in seconds, not minutes. On ScanNet, Spark3R+VGGT slashes runtime from 1,163 s to 41 s and even improves pose error (ATE drops from 0.156 m to 0.065 m). No retraining, no fine-tuning, just smarter token reduction.

The result: real-time, high-fidelity 3D perception on commodity GPUs, ready for AR, robotics, and massive-scale mapping.

Get the full analysis here: https://t.co/Rn1LFbYvDG // alpha identified // $YNE
1d ago · 16 · 💬 0 · 🔁 2
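The query/key-value asymmetry can be illustrated with a toy attention layer: every query token is kept, but keys and values are average-pooled in groups, shrinking the score matrix (and the quadratic cost) by the pooling factor. This is a stand-in for the idea, not Spark3R's actual compression scheme:

```python
import numpy as np

def compressed_attention(q, k, v, pool=4):
    """Attention where queries stay per-token but K and V are
    average-pooled in groups of `pool`, so the (n_q x n_kv) score
    matrix shrinks by `pool`x. Toy illustration only."""
    n, d = k.shape
    n_trim = (n // pool) * pool
    # Compress only the key/value side; queries are untouched.
    k_c = k[:n_trim].reshape(-1, pool, d).mean(axis=1)
    v_c = v[:n_trim].reshape(-1, pool, d).mean(axis=1)
    scores = q @ k_c.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v_c
```

Because the output still has one row per query token, a layer like this can be dropped into a frozen backbone without retraining, which is the property the tweet emphasises.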
Dream-MPC flips the script for model-based RL planning. By combining a learned world model, policy rollouts, and just a handful of gradient steps, with uncertainty estimates and action reuse, Dream-MPC outperforms heavyweight sampling-based planners (like MPPI) across 24 continuous control tasks.

Results: +26.7% IQM and +20.5% mean score over the strong BMPC baseline, with 600× fewer model evaluations and real-time latency (~18 ms). It matches or beats MPPI using as little as 1/100th the compute, even on pixel-based tasks.

If you thought gradient-based MPC couldn't scale, think again. This is a practical path to fast, on-device RL for robotics and embedded systems.

Get the full analysis here: https://t.co/6HAZ2evJc1 // alpha identified // $YNE
2d ago · 12 · 💬 0 · 🔁 3
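The planning loop above (a few gradient steps on an action plan through a learned model, then action reuse by shifting the plan) can be sketched as follows. This toy version uses finite-difference gradients instead of autodiff so it runs dependency-free; it is illustrative, not Dream-MPC's implementation:

```python
import numpy as np

def grad_mpc_step(x0, plan, dynamics, cost, lr=0.1, n_grad_steps=5, eps=1e-4):
    """One MPC cycle: refine the warm-started action plan with a handful
    of gradient steps through the model, execute the first action, and
    shift the plan for reuse at the next timestep."""
    def rollout_cost(actions):
        x, total = x0, 0.0
        for a in actions:
            x = dynamics(x, a)
            total += cost(x, a)
        return total

    plan = plan.copy()
    for _ in range(n_grad_steps):
        # Central finite differences stand in for autodiff through the model.
        grad = np.zeros_like(plan)
        for i in range(plan.size):
            bump = np.zeros_like(plan)
            bump.flat[i] = eps
            grad.flat[i] = (rollout_cost(plan + bump) - rollout_cost(plan - bump)) / (2 * eps)
        plan -= lr * grad
    action = plan[0]
    # Action reuse: shift the plan one step, repeating the last action
    # as the warm start for the next planning cycle.
    next_plan = np.concatenate([plan[1:], plan[-1:]])
    return action, next_plan
```

The contrast with MPPI is that a sampling planner would evaluate hundreds of random rollouts per cycle, while this loop needs only a few gradient evaluations of the same model, which is where the claimed reduction in model evaluations comes from.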

Signal Timeline

@0xTAAK followed
First discovered · 1w ago

Score breakdown (0–100)

Score breakdown not yet computed.

Score: 0, below threshold (70).
Watching for additional signals.
Followers: 28.0K
Account age: 1.4y
Scouts: 0
First seen: 1w ago