7 changes: 7 additions & 0 deletions .dockerignore
Original file line number Diff line number Diff line change
@@ -0,0 +1,7 @@
node_modules
npm-debug.log
dist
.git
.gitignore
.DS_Store
*.log
42 changes: 42 additions & 0 deletions .github/workflows/deploy-containerapp.yml
@@ -0,0 +1,42 @@
name: Deploy Container App

on:
  push:
    branches: [main]

permissions:
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Azure Login
        uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
          client-secret: ${{ secrets.AZURE_CLIENT_SECRET }}

      - name: Build and push image
        run: |
          ACR=${{ secrets.ACR_NAME }}
          IMAGE="$ACR.azurecr.io/resume-web:${{ github.sha }}"
          az acr login --name "$ACR"
          docker build -f Dockerfile.chat -t "$IMAGE" .
          docker push "$IMAGE"
          echo "IMAGE=$IMAGE" >> "$GITHUB_ENV"

      - name: Update Container App
        run: |
          az containerapp update \
            --name ${{ secrets.CONTAINERAPP_NAME }} \
            --resource-group ${{ secrets.RESOURCE_GROUP }} \
            --image "$IMAGE" \
            --set-env-vars \
              AZURE_OPENAI_API_KEY=${{ secrets.AZURE_OPENAI_API_KEY }} \
              AZURE_OPENAI_ENDPOINT=${{ secrets.AZURE_OPENAI_ENDPOINT }}
17 changes: 17 additions & 0 deletions Dockerfile
@@ -0,0 +1,17 @@
FROM node:20-alpine AS builder

WORKDIR /app

COPY package.json package-lock.json ./
RUN npm ci

COPY . .
RUN npm run build

FROM nginx:alpine

COPY --from=builder /app/docs /usr/share/nginx/html

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]
24 changes: 24 additions & 0 deletions Dockerfile.chat
@@ -0,0 +1,24 @@
FROM node:20-alpine AS build

WORKDIR /app

COPY package.json package-lock.json ./
RUN npm ci

COPY tailwind.css postcss.config.js tailwind.config.js ./
COPY docs ./docs
RUN npm run build

FROM python:3.11-slim

WORKDIR /app

COPY cv_chat/requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY cv_chat ./cv_chat
COPY --from=build /app/docs ./docs

ENV PORT=8080

CMD ["gunicorn", "-w", "2", "-b", "0.0.0.0:8080", "cv_chat.app:app"]
92 changes: 92 additions & 0 deletions README.md
@@ -41,6 +41,98 @@ Only generate CSS that is used on the page, which results in a much smaller file
npm run build
```

Tutorial: What Was Done
---------

This project was updated and deployed as a static résumé site. Below is a concise walkthrough of the changes and deployment flow so you can repeat it.

### 1) Update the content

- Edit `docs/index.html` and replace the template content with your CV details.
- Keep the overall structure (sections, headers, list items) so the layout stays consistent.

### 2) Tailwind v4 + PostCSS fixes

Tailwind v4 moved its PostCSS plugin to a separate package. The build pipeline was adjusted accordingly:

```
npm install @tailwindcss/postcss
```

In `postcss.config.js`, swap the Tailwind plugin for the new one and pass the config path:

```
module.exports = {
  plugins: [
    require("@tailwindcss/postcss")({ config: "./tailwind.config.js" })
  ]
};
```

In `tailwind.css`, use the v4 import style:

```
@import "tailwindcss";
```

### 3) Two-column layout

To force two columns on all screen sizes, use `col-count-2` (without the `md:` prefix) on the column container in `docs/index.html`:

```
<div class="col-count-2 print:col-count-2 col-gap-md h-letter-col print:h-letter-col col-fill-auto">
```

### 4) Replace deprecated CSS

Autoprefixer warns that `color-adjust` is deprecated. Replace it with `print-color-adjust` in `tailwind.css`:

```
print-color-adjust: exact !important;
```

### 5) Docker production container

A production Docker image builds the CSS and serves the site via Nginx.

```
docker build -t universal-resume .
docker run --rm -p 8080:80 universal-resume
```
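Once the container is running, you can check that Nginx answers on the mapped port. A minimal Python sketch (it assumes the `-p 8080:80` mapping from the command above; this script is not part of the repo):

```python
# Quick smoke test for the locally running container.
# Assumes it is listening on localhost:8080 (matches the `docker run` above).
from urllib.request import urlopen
from urllib.error import URLError


def site_is_up(url: str = "http://localhost:8080/") -> bool:
    """Return True if the URL answers with HTTP 200."""
    try:
        with urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False


if __name__ == "__main__":
    print("up" if site_is_up() else "down")
```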

### 6) Azure Static Website deployment

This setup uses Azure Storage Static Website hosting.

Install the Azure CLI on Windows from <https://aka.ms/installazurecliwindows>.

Login (PowerShell needs the `&` call operator):

```
& "C:\Program Files\Microsoft SDKs\Azure\CLI2\wbin\az.cmd" login --use-device-code
```

Then create resources and upload:

```
$rg = "universal-resume-rg"
$loc = "germanywestcentral"
$sa = "resumestatic" + (Get-Random -Maximum 99999)

& "C:\Program Files\Microsoft SDKs\Azure\CLI2\wbin\az.cmd" group create --name $rg --location $loc
& "C:\Program Files\Microsoft SDKs\Azure\CLI2\wbin\az.cmd" storage account create --name $sa --resource-group $rg --location $loc --sku Standard_LRS --kind StorageV2 --allow-blob-public-access true
& "C:\Program Files\Microsoft SDKs\Azure\CLI2\wbin\az.cmd" storage blob service-properties update --account-name $sa --static-website --index-document index.html --404-document index.html

$web = & "C:\Program Files\Microsoft SDKs\Azure\CLI2\wbin\az.cmd" storage account show --name $sa --resource-group $rg --query "primaryEndpoints.web" -o tsv

$key = & "C:\Program Files\Microsoft SDKs\Azure\CLI2\wbin\az.cmd" storage account keys list --account-name $sa --resource-group $rg --query "[0].value" -o tsv
& "C:\Program Files\Microsoft SDKs\Azure\CLI2\wbin\az.cmd" storage blob upload-batch --account-name $sa --account-key $key --destination '$web' --source .\docs

$web
```
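If you prefer scripting the upload step instead of calling the az CLI, a sketch using the `azure-storage-blob` SDK follows (the package and its `BlobServiceClient`/`upload_blob` API are real; the environment variable names are placeholders, not values from this repo). Setting a content type per file matters, because otherwise browsers download `index.html` instead of rendering it:

```python
# Sketch: upload ./docs to the $web container with the Python SDK.
# Assumes `pip install azure-storage-blob` and placeholder env vars
# AZURE_STORAGE_ACCOUNT / AZURE_STORAGE_KEY.
import mimetypes
import os
from pathlib import Path


def guess_content_type(path: str) -> str:
    """Return a MIME type so browsers render files instead of downloading them."""
    ctype, _ = mimetypes.guess_type(path)
    return ctype or "application/octet-stream"


def upload_docs(account_name: str, account_key: str, source: str = "docs") -> None:
    # Imported lazily so guess_content_type works without the SDK installed.
    from azure.storage.blob import BlobServiceClient, ContentSettings

    service = BlobServiceClient(
        account_url=f"https://{account_name}.blob.core.windows.net",
        credential=account_key,
    )
    container = service.get_container_client("$web")
    for file in Path(source).rglob("*"):
        if file.is_file():
            blob_name = file.relative_to(source).as_posix()
            with file.open("rb") as data:
                container.upload_blob(
                    name=blob_name,
                    data=data,
                    overwrite=True,
                    content_settings=ContentSettings(
                        content_type=guess_content_type(str(file))
                    ),
                )


if __name__ == "__main__" and "AZURE_STORAGE_ACCOUNT" in os.environ:
    upload_docs(os.environ["AZURE_STORAGE_ACCOUNT"], os.environ["AZURE_STORAGE_KEY"])
```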

Starting Point
---------

2 changes: 2 additions & 0 deletions cv_chat/.gitattributes
@@ -0,0 +1,2 @@
# Auto detect text files and perform LF normalization
* text=auto
17 changes: 17 additions & 0 deletions cv_chat/.gitignore
@@ -0,0 +1,17 @@

# dependencies
/.ipynb_checkpoints
/.pnp
.pnp.js

# misc
.DS_Store
.env
.env.local
.env.development.local
.env.test.local
.env.production.local

npm-debug.log*
yarn-debug.log*
yarn-error.log*
53 changes: 53 additions & 0 deletions cv_chat/andrei_context.txt
@@ -0,0 +1,53 @@
Name: Andrei Sirazitdinov
Location: Mannheim, Germany
Contact: dyh@list.ru | +49 176 47699707

About:
- Ph.D. in Computer Science
- University of Heidelberg | Data Science, ML, AI, Computer Vision
- As a researcher, I focused on causal inference and individualized treatment effect estimation with deep learning. I also worked on explainable AI for healthcare and vision tasks. Permanent residency and full work authorization in Germany. Open to relocation.

Research interests:
- Causal inference, individualized treatment assignment, explainable ML
- Built and evaluated deep learning models (MLPs, GANs, VAEs, GNNs) in TensorFlow, Keras, and PyTorch for healthcare decision support, privacy, and explainability.

Experience:
- University of Heidelberg, Germany (Apr 2020 – Oct 2025) — Ph.D. Candidate
  - Individualized treatment assignment with MLPs, GANs, VAEs, and GNNs in TensorFlow/Keras.
  - Pain patient treatment strategies using K-NN clustering and XGBoost with RCT validation.
  - Explainable pathology patch classification with prototype learning and decision trees in PyTorch.
  - Digital twin models using stable diffusion for patient data privacy in PyTorch.
  - Dropout prediction model for cancer patients achieving 80% precision in TensorFlow.
- National Institute of Informatics, Japan (Oct 2018 – Apr 2019) — Research Intern
  - Computer vision algorithm for long-term video prediction using PyTorch.
- Irkutsk Branch of MSTUCA, Russia (Oct 2014 – Jul 2015) — Student Assistant
  - Built a multi-camera system to capture helicopter panel data and populate digital tables automatically.

Education:
- Ph.D. in Computer Science — University of Heidelberg, Germany (2020 – 2025)
- M.Sc. in Visual Computing — Saarland University, Germany (2016 – 2019)
- B.Sc. in Applied Informatics and Mathematics — Irkutsk State University, Russia (2012 – 2016)
- High School — Irkutsk, Russia (2010 – 2012)

Projects:
- Trading Bots (Python) — WebSocket API | Docker | GitHub Actions
- Local ChatGPT Clone — Streamlit | OpenAI API | SQLite
- Databricks Integration — Centralized storage for chats and user authentication

Skills:
- ML/AI: TensorFlow, PyTorch, Scikit-learn, Keras
- Data tools: Pandas, NumPy
- Tools and platforms: Python, SQL, R, Docker, Kubernetes, MLflow, Azure (beginner), Git, Linux, Windows

Languages:
- English (Fluent speaking, reading, and writing)
- German (B1 certificate, completed C1 courses)
- Russian (Native)

Selected publications:
- A. Sirazitdinov et al., "Graph Neural Networks for Individual Treatment Effect Estimation," IEEE Access, 2024.
- A. Sirazitdinov et al., "Review of Deep Learning Methods for Individual Treatment Effect Estimation with Automatic Hyperparameter Optimization," TechRxiv, 2022.

Git Repos:
- https://github.com/dyh1265/Causal-Inference-Library — Causal inference library for GNN-TARNet and related models; includes notebooks and code for the GNN-ITE paper.
- https://github.com/dyh1265/PerPain-allocation — PerPain patient allocation system with two approaches: an R Shiny clustering app and Python ML models (TARnet, GNN-TARnet, GAT-TARnet, T-Learner) for individualized treatment recommendations.
124 changes: 124 additions & 0 deletions cv_chat/app.py
@@ -0,0 +1,124 @@
import os
import re
from html import unescape
from html.parser import HTMLParser
from pathlib import Path

import dotenv
from flask import Flask, request, send_from_directory
from openai import AzureOpenAI

dotenv.load_dotenv()

BASE_DIR = Path(__file__).resolve().parent
DOCS_DIR = BASE_DIR.parent / "docs"

app = Flask(__name__, static_folder=str(DOCS_DIR), static_url_path="")

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-15-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)



context_path = BASE_DIR / "andrei_context.txt"


class _CVTextParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self._chunks = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in {"script", "style"}:
            self._skip = True
        if tag in {"h1", "h2", "h3", "p", "li"}:
            self._chunks.append("\n")

    def handle_endtag(self, tag):
        if tag in {"script", "style"}:
            self._skip = False
        if tag in {"h1", "h2", "h3", "p", "li"}:
            self._chunks.append("\n")

    def handle_data(self, data):
        if self._skip:
            return
        text = data.strip()
        if text:
            self._chunks.append(text)

    def text(self):
        raw = " ".join(self._chunks)
        normalized = re.sub(r"[ \t]+", " ", raw)
        normalized = re.sub(r"\n{2,}", "\n", normalized)
        return normalized.strip()


def build_context_from_cv(cv_path: Path, extra_context: str) -> str:
    if not cv_path.exists():
        return extra_context
    parser = _CVTextParser()
    parser.feed(cv_path.read_text(encoding="utf-8"))
    cv_text = unescape(parser.text())
    cv_text = re.sub(r"[ \t]+\n", "\n", cv_text)

    git_block = ""
    if "Git Repos:" in extra_context:
        parts = extra_context.split("Git Repos:", 1)
        git_block = "Git Repos:\n" + parts[1].strip()

    if git_block:
        return f"{cv_text}\n\n{git_block}"
    return cv_text


if context_path.exists():
    raw_context = context_path.read_text(encoding="utf-8").strip()
else:
    raw_context = ""

andrei_context = build_context_from_cv(DOCS_DIR / "index.html", raw_context)

conversation = [
    {
        "role": "system",
        "content": (
            "You are a concise assistant that only answers questions about Andrei Sirazitdinov's skills, competencies, projects, and background. "
            "Reply in short bullet points (no bold). If the question is not about Andrei's skills, competencies, projects, and background, say you can only answer "
            "questions about his skills, competencies, projects, and background and ask the user to rephrase."
            + (f"\n\nContext about Andrei:\n{andrei_context}" if andrei_context else "")
        ),
    }
]

@app.route("/")
def index():
    return send_from_directory(DOCS_DIR, "index.html")


@app.route("/<path:filename>")
def static_files(filename):
    return send_from_directory(DOCS_DIR, filename)

@app.route("/chat", methods=["POST"])
def chat():
    user_input = request.json.get("message")
    # Note: `conversation` is module-global, so chat history is shared
    # across all clients of this single-user demo.
    conversation.append({"role": "user", "content": user_input})

    response = client.chat.completions.create(
        model="gpt-5.2-chat",
        messages=conversation,
    )

    reply = response.choices[0].message.content
    conversation.append({"role": "assistant", "content": reply})

    return reply

if __name__ == "__main__":
    port = int(os.getenv("PORT", "8080"))
    app.run(host="0.0.0.0", port=port, debug=False)
4 changes: 4 additions & 0 deletions cv_chat/requirements.txt
@@ -0,0 +1,4 @@
flask==3.0.2
python-dotenv==1.0.1
openai==1.59.7
gunicorn==22.0.0