Training Boltzmann machines in Wiebe et al. #14

@apozas

Description

In appendix E, section 5 of Quantum Deep Learning, the authors write:

[...] our quantum algorithms [...] can therefore efficiently train full Boltzmann machines given that the mean-field approximation to the Gibbs state has only polynomially small overlap with the true Gibbs state.

However, reading the rest of the article leads one to the idea that the mean-field state is a good enough approximation (better than a uniform state) to be used as a prior for the network.

Does this phrase therefore contain a typo, or am I missing something?
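For what it's worth, the quantity in question can be checked numerically on a toy model. The sketch below (my own illustration, not code from the paper; the couplings, field strengths, and system size are arbitrary assumptions) builds a small Ising-type Boltzmann machine, computes the exact Gibbs distribution, fits a mean-field product distribution by iterating the self-consistency equations m_i = tanh(h_i + Σ_j J_ij m_j), and compares the classical fidelity (overlap) of the mean-field and uniform distributions with the Gibbs distribution:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 6                                   # number of spins (illustrative size)
J = rng.normal(scale=0.3, size=(n, n))  # random symmetric couplings (assumption)
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
h = rng.normal(scale=0.3, size=n)       # random local fields (assumption)

# Exact Gibbs distribution p(v) ∝ exp(-E(v)), with E(v) = -v·J·v/2 - h·v
states = np.array(list(itertools.product([-1, 1], repeat=n)))
energies = -0.5 * np.einsum('si,ij,sj->s', states, J, states) - states @ h
p = np.exp(-energies)
p /= p.sum()

# Mean-field magnetizations: iterate m_i = tanh(h_i + sum_j J_ij m_j)
m = np.zeros(n)
for _ in range(200):
    m = np.tanh(h + J @ m)

# Mean-field product distribution q(v) = prod_i (1 + v_i m_i) / 2
q = np.prod((1 + states * m) / 2, axis=1)

# Classical fidelity (overlap) with the true Gibbs distribution
overlap_mf = np.sum(np.sqrt(p * q))
uniform = np.full(len(states), 1 / len(states))
overlap_uniform = np.sum(np.sqrt(p * uniform))
print(overlap_mf, overlap_uniform)
```

For weak couplings the mean-field overlap is typically much closer to 1 than the uniform one, which matches the intuition elsewhere in the paper; the quoted sentence would then read more naturally as requiring the overlap to be at most polynomially *small* (i.e. at least inverse-polynomially large), rather than literally "only polynomially small".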
