```python
class BowenJing:
    name = "Bowen Jing (荆博闻)"
    role = "Researcher in Generative AI and Embodied Intelligence"
    current_focus = [
        "Generative World Models",
        "Embodied Intelligence",
        "Autonomous Driving",
        "Robotics",
        "Counterfactual Reasoning",
    ]
    mission = (
        "Build AI systems that model, understand, and interact with "
        "the real world."
    )
```

Researching generative world models for embodied intelligence, with a focus on diffusion-based simulation of interactive environments.
```text
$ neofetch bowen_jing

bowen@research-lab
------------------
OS: Manchester / Embodied Intelligence Lab
Kernel: Generative World Models
Shell: zsh
Uptime: Always building
Host: Autonomous Driving + Robotics
IDE: VS Code / Jupyter / Terminal
Languages: Python, C++, TypeScript, SQL, LaTeX
Learning: Diffusion, World Models, Causality, Planning
Status: shipping ideas into simulators
Motto: "model the world, then move in it"
```

```toml
[identity]
name = "Bowen Jing"
title = "Researcher in Generative AI and Embodied Intelligence"
location = "University of Manchester"
mode = "researcher-builder"

[core]
question = "How can generative models learn world dynamics for embodied decision-making?"
stack = ["diffusion", "world_models", "causal_reasoning", "robotics", "autonomous_driving"]
taste = ["first_principles", "clean_systems", "interactive_simulation"]

[runtime]
currently_building = [
    "diffusion-based world simulation",
    "interactive environment modeling",
    "counterfactual reasoning for agents",
]
long_term_goal = "true intelligence grounded in physical environments"
```

```text
[focus]     Generative world models for embodied agents
[shipping]  Driving scene generation / trajectory modeling / simulation realism
[debugging] How intelligence emerges from internal models of the world
[obsession] Prediction, causality, planning, interaction
[energy]    high
```
My research focuses on generative world models for embodied agents. I study how diffusion-based generative models can model and simulate complex interactive environments, enabling intelligent agents to reason, predict, and act in the physical world.
My work sits at the intersection of:
- **Generative Models** for world simulation, prediction, and controllable generation
- **World Models** for internal representations of dynamic physical environments
- **Embodied Intelligence** for agents that perceive, plan, and interact
- **Autonomous Driving and Robotics** as high-impact real-world testbeds
The long-term goal is to develop AI systems that understand and interact with the real world, moving beyond perception toward true intelligence grounded in physical environments.
How can generative models learn the dynamics of the real world and support intelligent decision-making for embodied agents?
- Diffusion and generative models for scene generation
- Future prediction in interactive environments
- Multi-agent interaction modeling
- Simulation environments for safety-critical decision making
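The forward-noising identity that diffusion-based scene generation builds on can be sketched in a few lines of NumPy. The linear noise schedule is a common DDPM-style choice with illustrative values, and the "scene" is just a random vector; a real model would train a network to predict the noise, whereas here the true noise is used as an oracle to show the closed-form recovery of the clean sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear noise schedule (illustrative DDPM-style values).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

x0 = rng.normal(size=8)  # stand-in for a "scene" (e.g. a flattened trajectory)

# Forward process: q(x_t | x_0) = N(sqrt(a_bar_t) * x_0, (1 - a_bar_t) * I)
t = 500
eps = rng.normal(size=8)
xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# If a network predicted eps exactly, x_0 comes back in closed form:
x0_hat = (xt - np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
print(np.allclose(x0_hat, x0))  # True
```

Training a denoiser amounts to regressing `eps` from `xt` and `t`; sampling then runs this recovery step iteratively from pure noise.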
- Asking: *what would happen if an agent behaved differently?*
- Causal analysis for safer autonomy
- Scenario rollouts for decision evaluation
- Stress testing systems under alternative futures
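The counterfactual questions above can be made concrete with a toy rollout: replay the same start state through a dynamics model under a factual and an alternative action sequence, then compare outcomes. The point-mass `step` dynamics and the action sequences are illustrative assumptions, standing in for a learned world model's one-step prediction:

```python
import numpy as np

def step(state, action):
    # Toy point-mass dynamics: a stand-in for a learned one-step predictor.
    pos, vel = state
    vel = vel + 0.1 * action
    pos = pos + 0.1 * vel
    return np.array([pos, vel])

def rollout(state, actions):
    """Roll the dynamics forward under a given action sequence."""
    states = [state]
    for a in actions:
        state = step(state, a)
        states.append(state)
    return np.stack(states)

start = np.array([0.0, 0.0])
factual = rollout(start, [1.0] * 10)                      # keeps accelerating
counterfactual = rollout(start, [1.0] * 5 + [-1.0] * 5)   # brakes halfway

# "What if the agent had braked?" — compare the two imagined futures.
print(factual[-1][0], counterfactual[-1][0])
```

Stress testing follows the same pattern: sweep many alternative action sequences and flag the rollouts that violate a safety condition.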
- Autonomous driving systems with stronger environment understanding
- Robotics agents with grounded planning and interaction
- Interactive agents that reason over dynamics instead of static snapshots
- Long-horizon decision making in complex physical worlds
My work is driven by first-principles thinking:
- What is an environment?
- What is intelligence?
- What is a decision?
I combine ideas from Cognitive Science, Generative Modeling, and Causal Reasoning to study how AI systems can build an internal world model and use it for planning, prediction, and interaction.
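One minimal way to show "build an internal world model and use it for planning" is random-shooting planning: sample candidate action sequences, imagine each future inside the model, and execute only the first action of the best-scoring sequence. The one-dimensional dynamics and the reach-the-goal reward below are toy assumptions for illustration, not any particular system:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(state, action):
    # Hypothetical learned dynamics: a 1-D point nudged by an action.
    return state + 0.1 * action

def reward(state):
    # Toy objective: get close to position 1.0.
    return -abs(state - 1.0)

def plan(state, horizon=10, n_candidates=256):
    """Random-shooting planner: imagine futures, keep the best first action."""
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon))
    returns = np.zeros(n_candidates)
    for i, actions in enumerate(candidates):
        s = state
        for a in actions:              # imagined rollout in the world model
            s = model(s, a)
            returns[i] += reward(s)
    best = candidates[np.argmax(returns)]
    return best[0]                     # MPC-style: execute only the first action

state = 0.0
for _ in range(20):
    state = model(state, plan(state))
print(round(state, 2))  # the agent steers toward the goal at 1.0
```

Replacing the toy `model` with a learned dynamics network (and the sampler with something like CEM) gives the standard model-based planning loop.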
```python
research_process = {
    "inputs": [
        "videos",
        "trajectories",
        "multimodal observations",
        "interactive environments",
    ],
    "compiler": [
        "first-principles thinking",
        "causal reasoning",
        "diffusion modeling",
        "embodied decision making",
    ],
    "outputs": [
        "world models",
        "counterfactual rollouts",
        "safer autonomous systems",
        "agents that can plan in the real world",
    ],
}


def main():
    while True:
        observe_world()
        model_dynamics()
        imagine_futures()
        evaluate_counterfactuals()
        build_intelligence()
```

```text
$ sudo apt install true-intelligence
[sudo] password for bowen:
Reading package lists... Done
Building dependency tree... Done
The following new packages will be installed:
  world-models causality embodied-agents long-horizon-planning
0 upgraded, 4 newly installed, 0 to remove.
Setting up world-models...
Setting up causality...
Setting up embodied-agents...
Setting up long-horizon-planning...
>> system ready for reality
```

- **A Comprehensive Guide to Explainable AI: From Classical Models to LLMs** · arXiv preprint arXiv:2412.00800, 2024 · PDF · Citations: 9
- **StyleDrive: Towards Driving-Style Aware Benchmarking of End-To-End Autonomous Driving** · arXiv preprint arXiv:2506.23982, 2025 · PDF · Citations: 1
- **Generative Adversarial Networks Bridging Art and Machine Intelligence** · arXiv preprint arXiv:2502.04116, 2025 · PDF · Citations: 1
- **Deep Learning Model Security: Threats and Defenses** · arXiv preprint arXiv:2412.08969, 2024 · PDF · Citations: 1
- **Deep Learning, Machine Learning, Advancing Big Data Analytics and Management** · arXiv preprint arXiv:2412.02187, 2024 · PDF · Citations: 1
```yaml
languages:
  - Python
  - C++
  - TypeScript
  - SQL
  - LaTeX

research:
  - Diffusion Models
  - World Models
  - Autonomous Driving
  - Robotics
  - RL
  - Vision-Language Models

tooling:
  - PyTorch
  - Hugging Face
  - TensorFlow
  - CARLA
  - Docker
  - W&B
```