
Idea: new PyPI classifiers and packaging everything up with pip as a standard way of sharing architectures #217

@SamuelMarks

Description

What is your opinion on this idea, which I originally posted almost 3 years ago? (keras-team/keras#15762)

There are a huge number of new statistical, machine-learning and artificial intelligence solutions being released every month.

Most are open-source and written in a popular Python framework like TensorFlow, JAX, or PyTorch.

In order to 'guarantee' you are using the best solution [for given metric(s)] on your dataset, there needs to be a way of automatically adding these new statistical, machine-learning, and artificial intelligence solutions to your automated pipeline.

(additionally: useful for testing your new optimiser, loss function, &etc. across a zoo of datasets)

Ditto for transfer learning models. A related problem is automatically putting ensemble networks together. Something like:

import keras

import some_broke_arch  # pip install some_broke_arch
import other_neat_arch  # pip install other_neat_arch
import horrible_v_arch  # builtin to keras

model   = some_broke_arch.get_arch(   **standard_arch_params  )
metrics = other_neat_arch.get_metrics(**standard_metric_params)
loss    = horrible_v_arch.get_loss(   **standard_loss_params  )

model.compile(loss=loss, optimizer=keras.optimizers.RMSprop(), metrics=metrics)
model.summary()  # prints the summary itself; no need to wrap in print()
# &etc.

In summary, I am petitioning for standard ways of:

0. exposing algorithms for consumption (a sketch of one possible interface follows this list);

1. combining algorithms;

2. comparing algorithms.
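To make point 0 concrete, here is a minimal sketch of the kind of shared interface each pip-installable package could satisfy. The ArchProvider name and the exact signatures are my assumption for illustration, not an agreed standard:

from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class ArchProvider(Protocol):
    """What a pip-installable architecture package would expose.

    The names mirror the hypothetical get_arch / get_metrics / get_loss
    calls in the example above; this Protocol is an assumption, not a
    published standard.
    """

    def get_arch(self, **params: Any) -> Any: ...

    def get_metrics(self, **params: Any) -> list: ...

    def get_loss(self, **params: Any) -> Any: ...


# runtime_checkable only verifies attribute presence, so even a plain
# module can be validated:
#   import some_broke_arch
#   assert isinstance(some_broke_arch, ArchProvider)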

To that end, I would recommend encouraging the PyPI folk to add a few new classifiers, and having a bunch of us trawl through GitHub every month, sending PRs to random repositories associated with academic papers and wiring up CI/CD, so that those projects become installable with pip install and searchable by classifier on PyPI.
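For concreteness, a minimal setup.py sketch for one of the hypothetical packages above. The "Artificial Intelligence" classifier exists on PyPI today; the "Model Architecture" and "Framework :: Keras" entries are the kind of proposed additions this issue is asking for and do not exist yet:

from setuptools import setup

setup(
    name="some_broke_arch",        # hypothetical package from the example above
    version="0.1.0",
    py_modules=["some_broke_arch"],
    install_requires=["keras"],
    classifiers=[
        # Exists on PyPI today:
        "Topic :: Scientific/Engineering :: Artificial Intelligence",
        # Proposed additions (PyPI would reject these until adopted):
        "Topic :: Scientific/Engineering :: Artificial Intelligence :: Model Architecture",
        "Framework :: Keras",
    ],
)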

Relatedly, my open-source multi-ML meta-framework:

  • uses the builtin ast and inspect modules to traverse the module, class, and function hierarchy of 10 popular open-source ML/AI frameworks (rough sketch below);

  • will enable experimentation with the entire 'search-space' of these ML frameworks (every transfer learning model, optimiser, loss function, &etc.);

[…] and, with a standard way of sharing architectures, will be able to expand that 'search-space' with community-contributed solutions.
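As a rough sketch of the traversal the first bullet describes (list_symbols is a name I made up for this example, not the meta-framework's actual API):

import ast
import inspect


def list_symbols(module):
    """Return (kind, name) pairs for every class/function defined in a module."""
    tree = ast.parse(inspect.getsource(module))
    return [
        ("class" if isinstance(node, ast.ClassDef) else "function", node.name)
        for node in ast.walk(tree)
        if isinstance(node, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef))
    ]


# e.g. list_symbols(keras.losses) enumerates every loss exposed in that module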

IMHO there are a number of advantages to using existing approaches to finding and installing components of machine-learning models (and ensemble-able models).
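One such existing approach is setuptools entry points: provider packages register under a shared group, and consumers discover everything installed with importlib.metadata. The group name "keras.architectures" below is an assumption for the sketch, not an established convention:

from importlib.metadata import entry_points


def discover_archs(group="keras.architectures"):
    """Yield (name, provider) for every installed package that registered
    an entry point under the shared group."""
    for ep in entry_points(group=group):  # selection API: Python 3.10+
        yield ep.name, ep.load()


# for name, provider in discover_archs():
#     model = provider.get_arch(**standard_arch_params)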

Would appreciate your perspective (@bhack referenced your project).
