Tohoku University built a physics-grounded AI that predicts ionic dielectric tensors by first computing Born effective charges and phonon properties, then combining them via a physical formula — outperforming conventional black-box ML. Screening 8,000+ oxides, it identified 31 previously unknown high-dielectric candidates that could enable smaller, more efficient capacitors in smartphones and computers. Published in Physical Review X, April 2026.
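The summary does not spell out the "physical formula," but the standard lattice-dynamics expression for the ionic contribution to the static dielectric tensor is assembled from exactly these ingredients (Born effective charges and zone-center phonons), so it is presumably the relation the model evaluates:

```latex
% Ionic contribution to the static dielectric tensor (standard
% lattice-dynamics result, stated here as an assumption about the paper):
\epsilon^{\mathrm{ion}}_{\alpha\beta}
  = \frac{4\pi}{\Omega_{0}} \sum_{\lambda}
    \frac{\tilde{Z}_{\lambda\alpha}\,\tilde{Z}_{\lambda\beta}}{\omega_{\lambda}^{2}},
\qquad
\tilde{Z}_{\lambda\alpha}
  = \sum_{k,\beta} \frac{Z^{*}_{k,\alpha\beta}\; e_{\lambda}(k,\beta)}{\sqrt{m_{k}}}
```

Here Ω₀ is the unit-cell volume, ω_λ and e_λ are phonon frequencies and eigenvectors, m_k are atomic masses, and Z* are the Born effective charge tensors. Because each polar mode contributes in proportion to 1/ω², soft phonon modes dominate the response, which is why predicting Born charges and phonons first, then combining them, can beat end-to-end black-box regression.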
University of New Hampshire scientists used AI to systematically mine scientific literature, building the public NEMAD database of 67,573 magnetic materials, and identified 25 previously unreported compounds that stay ferromagnetic above 180 degrees C — made from abundant iron, manganese, and nitrogen rather than scarce rare earths. The discovery directly targets the strategic supply bottleneck for EV motors and wind turbines, which currently depend on China-controlled rare-earth element exports. Published in Nature Communications, February 2026.
Rice University scientists demonstrated the first use of AI/ML for designing genetic circuits, showing that ML models trained on high-throughput data from the CLASSIC platform are significantly more accurate than traditional physics-based models at predicting how circuits behave in human embryonic kidney cells. The finding upends the assumption that explicit biological knowledge must be encoded: given enough training data, ML models simply learn better predictors. The advance could dramatically accelerate the synthetic biology design-build-test cycle.
Researchers at Emory University used a physics-tailored neural network to analyze dusty plasma (the fourth state of matter), reconstructing the inter-particle forces with over 99% accuracy and finding them to be non-reciprocal. The AI overturned a decades-old assumption: particle size DOES affect how quickly inter-particle forces weaken, and a leading particle attracts the trailing one while the trailing particle always repels the leading one. The framework runs on a desktop and may generalize to biological many-body systems.
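The leader-attracts/trailer-repels asymmetry can be captured in a toy pair-force model. This is a minimal illustrative sketch with made-up parameter names and values, not the Emory group's learned force law:

```python
import numpy as np

def pair_forces(x_lead, x_trail, k_attract=1.0, k_repel=0.3, decay=1.0):
    """Toy 1-D non-reciprocal pair interaction (illustrative only).

    The trailing particle is pulled toward the leader, while the leader
    is pushed away from the trailer; the two forces do not cancel, so
    Newton's third law is violated (non-reciprocity).
    """
    d = x_lead - x_trail                # separation; leader ahead when d > 0
    w = np.exp(-decay * abs(d))         # interaction weakens with distance
    f_on_trail = +k_attract * w * np.sign(d)  # trailer attracted to leader
    f_on_lead  = +k_repel   * w * np.sign(d)  # leader repelled from trailer
    return f_on_trail, f_on_lead

f_t, f_l = pair_forces(1.0, 0.0)
print(f_t + f_l)  # nonzero: the pair carries a net force, unlike a
                  # reciprocal (action-reaction) interaction
```

The nonzero sum is the signature the text describes: in a reciprocal system the two forces would be equal and opposite.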
Sandia National Laboratories scientists published a paper in Nature Machine Intelligence showing neuromorphic chips can efficiently solve partial differential equations (PDEs) — math previously reserved for traditional supercomputers. For 12 years after the cortical network model was introduced, no one noticed its non-obvious link to PDEs; the new algorithm exploits that link and opens the path to the world's first neuromorphic supercomputer. This directly challenges the assumption that neuromorphic hardware is only good at pattern recognition.
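One commonly cited way spiking hardware maps onto PDEs (an assumption here, not a description of Sandia's specific algorithm) is the random-walk view of diffusion: the heat equation u_t = D·u_xx is solved by the density of many independent walkers, and discrete stochastic hops are exactly the kind of event-driven computation neuromorphic chips do natively. A plain NumPy sketch of the idea:

```python
import numpy as np

# Random-walk solution of the 1-D heat equation (illustrative sketch of
# the walker-density idea; NOT neuromorphic hardware code).
rng = np.random.default_rng(0)

n_walkers = 200_000
steps = 100
x = np.zeros(n_walkers, dtype=np.int64)   # all walkers start at the origin

for _ in range(steps):
    x += rng.choice((-1, 1), size=n_walkers)   # each walker hops left/right

# The walker density approaches the Gaussian solution of the heat
# equation: its variance grows linearly in time (unit step length, so
# variance ~= number of steps).
var = x.var()
print(var)   # approximately 100, matching `steps`
```

Each hop is a local, event-driven update with no global state, which is why the same computation can be laid out across millions of spiking neurons instead of a supercomputer's dense linear-algebra pipeline.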
Tufts University researchers combined symbolic reasoning with neural networks in a neuro-symbolic vision-language-action (VLA) model for robotics tasks, achieving a 95% task success rate vs. 34% for standard systems, while cutting training energy to 1% and operational energy to 5% of that of conventional approaches. The approach mimics how humans solve problems by breaking them into steps and abstract categories rather than raw pattern matching. The paper will be presented at ICRA 2026 in Vienna.
MIT Technology Review named mechanistic interpretability — the science of mapping features and circuits inside large language models — as a top 2026 breakthrough technology. Anthropic research traced the entire path a model takes from prompt to response by identifying recognizable concept-features and the weighted pathways between them. Foundational challenges remain: "feature" still lacks a rigorous definition, and many interpretability queries are computationally intractable.
Google DeepMind unveiled Aeneas in July 2025, a model trained on nearly 200,000 Roman inscriptions that can predict missing words in damaged Latin texts, estimate when and where they were created, and reveal historical links between texts across the Roman Empire. Similar AI tools have now translated hundreds of thousands of Hanja articles in months and decoded the first words from the Herculaneum scrolls buried by Vesuvius. The tools surface hypotheses, not ground truth: scholarly verification is still required.
Researchers at Ohio State University created shiitake mycelium-based memristors operating at ~5,850 Hz with ~90% accuracy, while Cornell built biohybrid robots embedding mycelium sensors that respond to light and chemical stimuli. The roadmap projects 2025-2028 as the era of enhanced fungal-computing prototypes for niche applications such as remote sensors, space hardware, and eco-bots. Serious hurdles remain: environmental sensitivity, speed limitations, and the difficulty of bridging living tissue to conventional electronics.