Config model path requires load from local #200

@wlm64

Description

The quick start examples don't work when the generator model path is a Hugging Face Hub ID rather than a local directory.

```
Loading test dataset from: dataset/nq/test.jsonl...
Traceback (most recent call last):
  File "/home/jovyan/FlashRAG/examples/quick_start/simple_pipeline.py", line 40, in <module>
    pipeline = SequentialPipeline(config, prompt_template=prompt_templete)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jovyan/FlashRAG/flashrag/pipeline/pipeline.py", line 53, in __init__
    self.generator = get_generator(config)
                     ^^^^^^^^^^^^^^^^^^^^^
  File "/home/jovyan/FlashRAG/flashrag/utils/utils.py", line 48, in get_generator
    with open(os.path.join(config["generator_model_path"], "config.json"), "r") as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'meta-llama/Meta-Llama-3-8B-Instruct/config.json'
```

`get_generator` seems to only look for the model locally: it joins `generator_model_path` with `config.json` and opens it directly, which fails for a Hub ID like `meta-llama/Meta-Llama-3-8B-Instruct`. And if I pass a direct local model path instead, there's a model import error later in the flow.

I was able to work around this by hard-coding the config path in utils.py while keeping the Hugging Face path in the config, but this feels hacky. Is there a proper fix for this?
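For reference, a minimal sketch of how the config could be resolved for both cases (a local checkpoint directory and a Hub repo ID), assuming `huggingface_hub` is installed. `load_model_config` is a hypothetical helper, not FlashRAG's actual API:

```python
import json
import os


def load_model_config(model_path):
    """Return the parsed config.json for a local checkpoint dir or a Hub repo ID.

    Hypothetical helper: FlashRAG's get_generator currently assumes a
    local directory only, which is what triggers the FileNotFoundError.
    """
    if os.path.isdir(model_path):
        # Local checkpoint: config.json sits next to the weights.
        config_file = os.path.join(model_path, "config.json")
    else:
        # Otherwise treat the path as a Hub repo ID and download
        # (or reuse the locally cached copy of) config.json.
        from huggingface_hub import hf_hub_download  # lazy: only needed for Hub IDs
        config_file = hf_hub_download(repo_id=model_path, filename="config.json")
    with open(config_file, "r") as f:
        return json.load(f)
```

With something like this in `get_generator`, the same config key would accept either `meta-llama/Meta-Llama-3-8B-Instruct` or a local path, so no hard-coding in utils.py would be needed.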
