
Bowen Jing

Generative AI x World Models x Embodied Intelligence

Website Email LinkedIn GitHub Hugging Face


whoami.py

class BowenJing:
    name = "Bowen Jing (荆博闻)"
    role = "Researcher in Generative AI and Embodied Intelligence"
    current_focus = [
        "Generative World Models",
        "Embodied Intelligence",
        "Autonomous Driving",
        "Robotics",
        "Counterfactual Reasoning",
    ]
    mission = (
        "Build AI systems that model, understand, and interact with "
        "the real world."
    )

Researching generative world models for embodied intelligence, with a focus on diffusion-based simulation of interactive environments.


neofetch

$ neofetch bowen_jing

bowen@research-lab
------------------
OS: Manchester / Embodied Intelligence Lab
Kernel: Generative World Models
Shell: zsh
Uptime: Always building
Host: Autonomous Driving + Robotics
IDE: VS Code / Jupyter / Terminal
Languages: Python, C++, TypeScript, SQL, LaTeX
Learning: Diffusion, World Models, Causality, Planning
Status: shipping ideas into simulators
Motto: "model the world, then move in it"

research.toml

[identity]
name = "Bowen Jing"
title = "Researcher in Generative AI and Embodied Intelligence"
location = "University of Manchester"
mode = "researcher-builder"

[core]
question = "How can generative models learn world dynamics for embodied decision-making?"
stack = ["diffusion", "world_models", "causal_reasoning", "robotics", "autonomous_driving"]
taste = ["first_principles", "clean_systems", "interactive_simulation"]

[runtime]
currently_building = [
  "diffusion-based world simulation",
  "interactive environment modeling",
  "counterfactual reasoning for agents",
]
long_term_goal = "true intelligence grounded in physical environments"

~/now

[focus]      Generative world models for embodied agents
[shipping]   Driving scene generation / trajectory modeling / simulation realism
[debugging]  How intelligence emerges from internal models of the world
[obsession]  Prediction, causality, planning, interaction
[energy]     high

Research Identity

My research focuses on generative world models for embodied agents. I study how diffusion-based generative models can simulate complex interactive environments, enabling intelligent agents to reason, predict, and act in the physical world.

My work sits at the intersection of:

  • Generative Models for world simulation, prediction, and controllable generation
  • World Models for internal representations of dynamic physical environments
  • Embodied Intelligence for agents that perceive, plan, and interact
  • Autonomous Driving and Robotics as high-impact real-world testbeds

The long-term goal is to develop AI systems that understand and interact with the real world, moving beyond perception toward true intelligence grounded in physical environments.


Research Theme

Generative AI -> World Models -> Embodied Intelligence

How can generative models learn the dynamics of the real world and support intelligent decision-making for embodied agents?

01. Generative World Models

  • Diffusion and generative models for scene generation
  • Future prediction in interactive environments
  • Multi-agent interaction modeling
  • Simulation environments for safety-critical decision making
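A minimal sketch of the forward noising step that diffusion-based scene generation builds on. The linear schedule, shapes, and toy "trajectory" data are illustrative assumptions, not code from any specific project:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x0, t, T=100):
    """Forward diffusion: blend clean data with Gaussian noise at step t (toy linear schedule)."""
    alpha = 1.0 - t / T
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha) * x0 + np.sqrt(1.0 - alpha) * eps
    return xt, eps

# Toy "scene": a batch of 8 two-dimensional trajectory points.
x0 = rng.standard_normal((8, 2))
xt, eps = add_noise(x0, t=50)

# A denoiser would be trained to regress eps from (xt, t);
# sampling then runs the learned reverse process to generate new scenes.
```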

02. Counterfactual Reasoning

  • Asking: What would happen if an agent behaved differently?
  • Causal analysis for safer autonomy
  • Scenario rollouts for decision evaluation
  • Stress testing systems under alternative futures
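The "what if the agent behaved differently" question can be made concrete with a toy rollout: the same point-mass dynamics (a hand-written stand-in for a learned world model) replayed under a factual and an alternative action sequence. All dynamics and numbers here are illustrative assumptions:

```python
import numpy as np

def rollout(x0, actions, dt=0.1):
    """Integrate a 2D point mass under a sequence of accelerations."""
    pos, vel = np.array(x0, dtype=float), np.zeros(2)
    for a in actions:
        vel = vel + dt * np.asarray(a, dtype=float)
        pos = pos + dt * vel
    return pos

# Factual: accelerate forward for 10 steps.
factual = [(1.0, 0.0)] * 10
# Counterfactual: what if the agent braked halfway through?
counterfactual = [(1.0, 0.0)] * 5 + [(-1.0, 0.0)] * 5

# How far the two imagined futures drift apart.
divergence = np.linalg.norm(rollout([0, 0], factual) - rollout([0, 0], counterfactual))
```

Comparing rollouts like this is the basic primitive behind scenario-based decision evaluation and stress testing.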

03. Embodied Intelligence

  • Autonomous driving systems with stronger environment understanding
  • Robotics agents with grounded planning and interaction
  • Interactive agents that reason over dynamics instead of static snapshots
  • Long-horizon decision making in complex physical worlds
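One simple way a world model supports grounded planning is model-predictive control by random shooting: imagine many action sequences inside the model, score the imagined outcomes, and execute the best first action. A minimal sketch, with toy known dynamics standing in for a learned model (all names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def step(state, action):
    """Toy known dynamics standing in for a learned world model."""
    return state + 0.1 * np.asarray(action, dtype=float)

def plan(state, goal, horizon=5, n_samples=256):
    """Random-shooting MPC: sample action sequences, imagine rollouts, return the best first action."""
    candidates = rng.uniform(-1.0, 1.0, size=(n_samples, horizon, 2))
    best_cost, best_action = np.inf, None
    for seq in candidates:
        s = np.array(state, dtype=float)
        for a in seq:
            s = step(s, a)
        cost = np.linalg.norm(s - goal)  # distance of the imagined endpoint to the goal
        if cost < best_cost:
            best_cost, best_action = cost, seq[0]
    return best_action

action = plan(np.zeros(2), np.array([0.4, 0.0]))
```

In practice the hand-written `step` would be a learned generative model, the cost would encode task and safety terms, and replanning after each executed action closes the loop.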

Research Philosophy

My work is driven by first-principles thinking:

  • What is an environment?
  • What is intelligence?
  • What is a decision?

I combine ideas from Cognitive Science, Generative Modeling, and Causal Reasoning to study how AI systems can build an internal world model and use it for planning, prediction, and interaction.


Current Radar

🔥 Building

  • Diffusion-based world simulation
  • Driving scene and trajectory generation
  • Embodied agents for interactive environments

🧪 Exploring

  • Counterfactual world modeling
  • Cognitive foundations of intelligence
  • Planning under uncertainty

system_monitor.py

research_process = {
    "inputs": [
        "videos",
        "trajectories",
        "multimodal observations",
        "interactive environments",
    ],
    "compiler": [
        "first-principles thinking",
        "causal reasoning",
        "diffusion modeling",
        "embodied decision making",
    ],
    "outputs": [
        "world models",
        "counterfactual rollouts",
        "safer autonomous systems",
        "agents that can plan in the real world",
    ],
}

def main():
    while True:
        observe_world()
        model_dynamics()
        imagine_futures()
        evaluate_counterfactuals()
        build_intelligence()

if __name__ == "__main__":
    main()

Terminal Aesthetic

$ sudo apt install true-intelligence
[sudo] password for bowen:
Reading package lists... Done
Building dependency tree... Done
The following new packages will be installed:
  world-models causality embodied-agents long-horizon-planning
0 upgraded, 4 newly installed, 0 to remove.

Setting up world-models...
Setting up causality...
Setting up embodied-agents...
Setting up long-horizon-planning...

>> system ready for reality

Publications

🎉🎉* A Comprehensive Guide to Explainable AI: From Classical Models to LLMs
2024 · arXiv:2412.00800
PDF · Citations 9

🎉🎉* StyleDrive: Towards Driving-Style Aware Benchmarking of End-To-End Autonomous Driving
2025 · arXiv:2506.23982
PDF · Citations 1

🎉🎉* Generative Adversarial Networks Bridging Art and Machine Intelligence
2025 · arXiv:2502.04116
PDF · Citations 1

🎉🎉* Deep Learning Model Security: Threats and Defenses
2024 · arXiv:2412.08969
PDF · Citations 1

🎉🎉* Deep Learning, Machine Learning, Advancing Big Data Analytics and Management
2024 · arXiv:2412.02187
PDF · Citations 1


Stack

languages:
  - Python
  - C++
  - TypeScript
  - SQL
  - LaTeX
research:
  - Diffusion Models
  - World Models
  - Autonomous Driving
  - Robotics
  - RL
  - Vision-Language Models
tooling:
  - PyTorch
  - Hugging Face
  - TensorFlow
  - CARLA
  - Docker
  - W&B

Connect

Portfolio Email LinkedIn Google Scholar


Long-term Vision

To build AI systems that can model, understand, and interact with the real world, enabling the emergence of general intelligence grounded in physical environments.


predict the future · understand causality · interact with the world · plan under uncertainty

Pinned

  1. AIR-THU/StyleDrive

    [AAAI2026 Oral] Official implementation of "StyleDrive: Towards Driving-Style Aware Benchmarking of End-To-End Autonomous Driving"

  2. Awesome-World-Models

    A curated collection of world model papers