Feinschmecker is designed as a companion for everyone interested in cooking their own meals. It provides tools to retrieve culinary recipes with specified properties (e.g., macronutrient amounts, calorie count, presence of meat, etc.) based on individual needs. Due to its focus on meal nutrition values, it is especially suitable for people on a weight-loss or strength-training journey. The possibility of filtering recipes based on a list of available ingredients helps prevent food waste.
Feinschmecker uses a knowledge graph created using OWL to answer the following key questions:
- What ingredients are required for a given recipe?
- Which recipes contain at least the specified amount of protein / fats / carbohydrates?
- Which recipes contain at most the specified amount of calories?
- Which recipes are vegetarian / vegan?
- Which recipes are good for breakfast / lunch / dinner?
- Which recipes can be made using the list of specified ingredients?
- Which recipes are easy / moderately hard / hard to make?
- Which recipes can be prepared within a specific amount of time?
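These questions are answered by querying the knowledge graph, but the filtering semantics can be sketched with plain Python data. The recipe records and field names below are invented for illustration only and do not reflect the ontology's actual vocabulary:

```python
# Toy stand-in for the knowledge graph; recipes and property names are invented.
RECIPES = {
    "Oatmeal": {"calories": 350, "protein_g": 12, "vegan": True,
                "ingredients": {"oats", "water", "banana"}},
    "Steak":   {"calories": 700, "protein_g": 45, "vegan": False,
                "ingredients": {"beef", "butter", "salt"}},
}

def with_max_calories(limit):
    """Recipes containing at most `limit` kcal."""
    return [name for name, r in RECIPES.items() if r["calories"] <= limit]

def makeable_from(available):
    """Recipes whose ingredient set is covered by the available ingredients."""
    return [name for name, r in RECIPES.items()
            if r["ingredients"] <= set(available)]
```

In the application itself these filters correspond to constraints evaluated against the OWL graph rather than Python dictionaries.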
The easiest way to run the application is using Docker Compose, which sets up all services including the backend, frontend, Redis, and Celery workers.
- Docker and Docker Compose installed
- Build the local ontology file:

  Before starting the services, you need to build the knowledge graph. This script fetches the base ontology, merges it with local data, and saves it as `data/feinschmecker.nt`:

  ```bash
  python scripts/build_ontology.py
  ```

  This file is used by the application by default.
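Since N-Triples is a line-oriented format, the merge step can be sketched in a few lines of Python. This is an illustrative sketch only; the actual `scripts/build_ontology.py` may work differently:

```python
# Illustrative merge of N-Triples documents; not the project's actual script.
def merge_ntriples(sources):
    """Merge several N-Triples documents, dropping duplicate triples."""
    seen = set()
    merged = []
    for text in sources:
        for line in text.splitlines():
            line = line.strip()
            # Skip blanks and comments; keep each distinct triple once.
            if line and not line.startswith("#") and line not in seen:
                seen.add(line)
                merged.append(line)
    return "\n".join(merged) + "\n"
```

This works because every N-Triples statement occupies exactly one line, so deduplicating lines deduplicates triples.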
- Start all services:

  ```bash
  docker compose up
  ```

  Or run in detached mode (background):

  ```bash
  docker compose up -d
  ```

- Access the application:
- Frontend: http://localhost:3000
- Backend API: http://localhost:5000
- API Documentation: http://localhost:5000/apidocs/
The Docker setup includes:
- Backend - Flask API server (port 5000)
- Frontend - Vue.js development server (port 3000)
- Redis - Cache and message broker (port 6379)
- Celery Worker - Background task processing (supports horizontal scaling)
- Celery Beat - Scheduled task scheduler
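As a rough illustration of how these services fit together, a compose file for such a stack might contain fragments like the following. The service names, build context, and Celery app path here are hypothetical; the repository's own `docker-compose.yml` is authoritative:

```yaml
# Hypothetical sketch, not the project's actual compose file.
services:
  redis:
    image: redis:7
    ports:
      - "6379:6379"

  celery-worker:
    build: ./backend                 # assumed build context
    command: >
      celery -A backend.celery_app worker
      --concurrency=${CELERY_WORKER_CONCURRENCY:-4}
    environment:
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      - redis
```

The `${VAR:-default}` interpolation is standard Compose syntax, which is one way the documented concurrency default of 4 could be wired in.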
```bash
# View logs
docker compose logs -f

# View logs for a specific service
docker compose logs -f backend
docker compose logs -f celery-worker

# Stop all services
docker compose down

# Stop and remove volumes
docker compose down -v

# Rebuild containers after code changes
docker compose up --build

# Restart a specific service
docker compose restart backend

# Scale Celery workers for increased parallelism (e.g., 3 worker instances)
docker compose up --scale celery-worker=3

# View logs from all worker instances
docker compose logs -f celery-worker
```

Celery workers can be configured for optimal parallelism:
- Concurrency: Number of worker processes per worker container (default: 4)
- Horizontal Scaling: Run multiple worker containers for increased throughput
Configuration Options:
- Environment Variable (recommended for flexibility):

  ```bash
  # Set concurrency per worker (defaults to CPU count if not set)
  export CELERY_WORKER_CONCURRENCY=4
  docker compose up
  ```

- Scale Workers Horizontally:

  ```bash
  # Run 3 worker containers, each with concurrency of 4 = 12 parallel tasks
  docker compose up --scale celery-worker=3
  ```

- Combine Both:

  ```bash
  # Set higher concurrency and scale to multiple workers
  export CELERY_WORKER_CONCURRENCY=8
  docker compose up --scale celery-worker=2
  # Result: 2 workers × 8 concurrency = 16 parallel tasks
  ```
Performance Tips:
- Default concurrency (4) is suitable for most use cases
- Increase worker instances for high-load scenarios rather than just concurrency
- Monitor worker utilization:

  ```bash
  docker compose logs -f celery-worker
  ```

- Tasks are distributed evenly across workers thanks to `prefetch_multiplier=1`
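The arithmetic above (worker containers × concurrency = parallel task slots) and the environment-variable fallback can be sketched in a few lines of Python. This mirrors the behavior described here but is not the project's actual startup code:

```python
import multiprocessing
import os

def worker_concurrency(env=None):
    """Use CELERY_WORKER_CONCURRENCY if set, else fall back to the CPU count."""
    env = os.environ if env is None else env
    value = env.get("CELERY_WORKER_CONCURRENCY")
    return int(value) if value else multiprocessing.cpu_count()

def total_parallelism(worker_containers, concurrency):
    """Parallel task slots across all scaled worker containers."""
    return worker_containers * concurrency
```

For example, `--scale celery-worker=2` with `CELERY_WORKER_CONCURRENCY=8` yields 16 parallel tasks, matching the "Combine Both" example above.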
You can customize the configuration using environment variables. Create a .env file in the root directory:
```bash
FLASK_ENV=development
REDIS_URL=redis://redis:6379/0
# ONTOLOGY_URL=https://example.com/your-custom-ontology.nt # Optional: Defaults to the local data/feinschmecker.nt
CORS_ORIGINS=*
CELERY_WORKER_CONCURRENCY=4 # Number of worker processes per worker container (defaults to CPU count)
```

See `.env.example` for all available options.
- Install dependencies:

  ```bash
  cd backend
  pip install -r requirements.txt
  ```

- Run the backend server:

  ```bash
  python3 website.py
  ```

  Or using gunicorn:

  ```bash
  gunicorn -c gunicorn.conf.py backend.website:app
  ```
- Install dependencies:

  ```bash
  cd webui
  npm install
  ```

- Run the development server:

  ```bash
  npm run dev
  ```

- Or build for production:

  ```bash
  npm run build
  npm run preview
  ```
Open a browser and go to the address shown in the terminal (typically http://localhost:3000 for the frontend).