Compare commits: 5b5949c059...master (8 commits)

| SHA1 |
|---|
| f4f109f75d |
| 7f402cd69b |
| ed880bdd57 |
| 08463e18d0 |
| 89dc3afa12 |
| 58ef5122ac |
| 1407653eeb |
| 25a43814dd |
.gitignore (vendored) — 5 lines changed
```diff
@@ -1,4 +1,3 @@
 .env
-BLOGPOST.md
-TRANSLATED_BLOGPOST.md
-SOURCE.md
+output/*.md
+output/*.txt
```
INSTRUCTIONS.md — the removed Hungarian line reads, in English: "Write a kind, positive-toned blog post informing readers about what we are currently working on."

```diff
@@ -1 +1,32 @@
-Írj egy kedves, pozitív hangvételű blogpost-ot ami tájékoztat arról, hogy épp min dolgozunk.
+# Task
+
+- Write a public-facing blog post on behalf of a development team.
+- Rely strictly on internal meeting notes.
+- Do not add any outside information or unverified claims.
+
+# Style
+
+- Warm and enthusiastic tone.
+- Concise, filler-free phrasing.
+- Meaningful information in every sentence.
+- Target audience is technical; use industry jargon.
+- Do not explain basic concepts.
+
+# Content
+
+- Address every point in the source material.
+- Guessing or speculating is strictly prohibited.
+- If information is missing or unclear, do not invent details.
+- Professional pride, not a marketing pitch or press release.
+- Short, punchy opening to set the context.
+- Body: completed work, progress, decisions, and challenges.
+- Closing: future steps based only on the source material.
+
+# Formatting
+
+- Use Markdown.
+- Single `#` for the title (specific and technical).
+- Use `##` for subheadings.
+- Short paragraphs (3-5 sentences each).
+- No bullet points or lists in the body text; write in flowing prose.
+- Total length: 300-500 words.
```
Makefile — 26 lines changed
```diff
@@ -1,23 +1,27 @@
 ENV = export $(shell cat .env | grep -v '^\#' | grep -v '^$$' | xargs)
 
-.PHONY: fetch write translate upload all
+.PHONY: fetch write translate upload clean all
 
-## Letölt egy wiki oldalt SOURCE.md-be
-## Használat: make fetch URL=/path/to/page
+## Downloads a wiki page into SOURCE.md
+## Usage: make fetch URL=/path/to/page
 fetch:
-	$(ENV) && python3 generator.py fetch $(URL)
+	@$(ENV) && python3 generator.py fetch $(URL)
 
-## Blogposztot ír SOURCE.md-ből → BLOGPOST.md
+## Writes a blog post from SOURCE.md → BLOGPOST.md
 write:
-	$(ENV) && python3 generator.py write
+	@$(ENV) && python3 generator.py write
 
-## Lefordítja BLOGPOST.md → TRANSLATED_BLOGPOST.md
+## Translates BLOGPOST.md → TRANSLATED_BLOGPOST.md
 translate:
-	$(ENV) && python3 generator.py translate
+	@$(ENV) && python3 generator.py translate
 
-## Feltölti TRANSLATED_BLOGPOST.md-t a wikire
+## Uploads TRANSLATED_BLOGPOST.md to the wiki
 upload:
-	$(ENV) && python3 generator.py upload
+	@$(ENV) && python3 generator.py upload
 
-## Teljes pipeline: write → translate → upload
+## Deletes .md files from the output directory
+clean:
+	@$(ENV) && python3 generator.py clean
+
+## Full pipeline: write → translate → upload
 all: write translate upload
```
generator.py — 419 lines changed
```diff
@@ -25,17 +25,94 @@ import urllib.request
 import urllib.error
 
 # ---------------------------------------------------------------------------
-# Config
+# Config & Templates
 # ---------------------------------------------------------------------------
 
-SOURCE_FILE = "SOURCE.md"
-BLOGPOST_FILE = "BLOGPOST.md"
-TRANSLATED_FILE = "TRANSLATED_BLOGPOST.md"
+OUTPUT_DIR = "output"
+SOURCE_FILE = os.path.join(OUTPUT_DIR, "SOURCE.md")
+SOURCE_TITLE_FILE = os.path.join(OUTPUT_DIR, "SOURCE_TITLE.txt")
+BLOGPOST_FILE = os.path.join(OUTPUT_DIR, "BLOGPOST.md")
+TRANSLATED_FILE = os.path.join(OUTPUT_DIR, "TRANSLATED_BLOGPOST.md")
 INSTRUCTIONS_FILE = "INSTRUCTIONS.md"
 
 GEMINI_MODEL = "gemini-flash-latest"
 GEMINI_BASE_URL = "https://generativelanguage.googleapis.com/v1beta/models"
 
+WRITE_PROMPT_TEMPLATE = """Please read the following instructions carefully and follow them to write a blog post.
+
+## INSTRUCTIONS
+
+{instructions}
+
+## TASK
+
+Read the source content below and write a blog post from it in {original_lang} language. Output only the blog post in Markdown format, with no additional commentary.
+
+## SOURCE CONTENT
+
+{source}"""
+
+TRANSLATE_PROMPT_TEMPLATE = """Translate the following Markdown blog post into {translate_lang}. Preserve all Markdown formatting, headings, links, and code blocks exactly. Output only the translated Markdown with no additional commentary.
+
+{blogpost}"""
+
+# ---------------------------------------------------------------------------
+# GraphQL Queries
+# ---------------------------------------------------------------------------
+
+QUERY_GET_PAGE = """
+query ($path: String!) {
+  pages {
+    singleByPath(path: $path, locale: "en") {
+      id
+      title
+      description
+      content
+    }
+  }
+}
+"""
+
+QUERY_FIND_PAGE = """
+query ($path: String!) {
+  pages {
+    singleByPath(path: $path, locale: "en") {
+      id
+    }
+  }
+}
+"""
+
+MUTATION_UPDATE_PAGE = """
+mutation ($id: Int!, $content: String!, $description: String!) {
+  pages {
+    update(id: $id, content: $content, description: $description, tags: ["blog"]) {
+      responseResult { succeeded message }
+    }
+  }
+}
+"""
+
+MUTATION_CREATE_PAGE = """
+mutation ($path: String!, $title: String!, $content: String!, $description: String!) {
+  pages {
+    create(
+      path: $path
+      title: $title
+      content: $content
+      editor: "markdown"
+      locale: "en"
+      isPublished: true
+      isPrivate: false
+      tags: ["blog"]
+      description: $description
+    ) {
+      responseResult { succeeded message }
+      page { id }
+    }
+  }
+}
+"""
+
 # ---------------------------------------------------------------------------
 # Helpers
```
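The query and mutation strings above are plain GraphQL documents; the `graphql` helper later in this diff wraps them, together with a variables dict, into the JSON body POSTed to the wiki's `/graphql` endpoint. A minimal, self-contained sketch of that payload construction (the page path here is a made-up example):

```python
import json

# One of the templates from the diff above, reproduced verbatim.
QUERY_FIND_PAGE = """
query ($path: String!) {
  pages {
    singleByPath(path: $path, locale: "en") {
      id
    }
  }
}
"""

def build_graphql_payload(query: str, variables: dict = None) -> str:
    # Mirrors how the code combines a query template with variables
    # before handing it to http_post as a JSON request body.
    payload = {"query": query}
    if variables:
        payload["variables"] = variables
    return json.dumps(payload)

body = build_graphql_payload(QUERY_FIND_PAGE, {"path": "blog/hello-world"})
decoded = json.loads(body)
```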
```diff
@@ -64,30 +141,6 @@ def http_post(url: str, payload: dict, headers: dict) -> dict:
         sys.exit(1)
 
 
-def wiki_graphql(base: str, token: str, query: str, variables: dict = None) -> dict:
-    url = f"{base}/graphql"
-    payload = {"query": query}
-    if variables:
-        payload["variables"] = variables
-    headers = {
-        "Authorization": f"Bearer {token}",
-        "Content-Type": "application/json",
-    }
-    return http_post(url, payload, headers)
-
-
-def gemini_generate(api_key: str, prompt: str) -> str:
-    url = f"{GEMINI_BASE_URL}/{GEMINI_MODEL}:generateContent"
-    payload = {"contents": [{"parts": [{"text": prompt}]}]}
-    headers = {"Content-Type": "application/json", "X-goog-api-key": api_key}
-    resp = http_post(url, payload, headers)
-    try:
-        return resp["candidates"][0]["content"]["parts"][0]["text"]
-    except (KeyError, IndexError) as e:
-        print(f"ERROR: Unexpected Gemini response structure: {resp}", file=sys.stderr)
-        sys.exit(1)
-
-
 def to_kebab(text: str) -> str:
     text = text.lower()
     text = re.sub(r"[^a-z0-9\s-]", "", text)
```
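The hunk above cuts off `to_kebab` after its first two statements. A plausible completion, assuming the remaining (unshown) lines collapse runs of whitespace and hyphens into single hyphens and trim the ends:

```python
import re

def to_kebab(text: str) -> str:
    # First two lines are verbatim from the diff above.
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s-]", "", text)
    # Assumed completion: squeeze whitespace/hyphen runs and trim.
    text = re.sub(r"[\s-]+", "-", text)
    return text.strip("-")

slug = to_kebab("Hello, World! A Blog Post")
```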
```diff
@@ -104,179 +157,177 @@ def read_file(path: str) -> str:
 
 
 def write_file(path: str, content: str) -> None:
+    os.makedirs(os.path.dirname(path), exist_ok=True)
     with open(path, "w", encoding="utf-8") as f:
         f.write(content)
     print(f"✓ Saved to {path}")
 
 
 # ---------------------------------------------------------------------------
-# Commands
+# Classes
 # ---------------------------------------------------------------------------
 
 
-def cmd_fetch(args):
-    """Download a Wiki.js page as Markdown via GraphQL."""
-    base = require_env("WIKI_BASE_DOMAIN")
-    token = require_env("WIKI_TOKEN")
-
-    # Strip base domain from URL if full URL was given, then strip leading slash
-    page_path = args.url.replace(base, "").lstrip("/")
-    print(f"→ Fetching wiki page: /{page_path}")
-
-    query = """
-    query ($path: String!) {
-      pages {
-        singleByPath(path: $path, locale: "en") {
-          id
-          title
-          description
-          content
-        }
-      }
-    }
-    """
-
-    resp = wiki_graphql(base, token, query, {"path": page_path})
-    page = resp.get("data", {}).get("pages", {}).get("singleByPath")
-
-    if not page:
-        errors = resp.get("errors", resp)
-        print(f"ERROR: Page not found at '{page_path}': {errors}", file=sys.stderr)
-        sys.exit(1)
-
-    write_file(SOURCE_FILE, page["content"])
-
-
-def cmd_write(args):
-    """Generate a blog post from SOURCE.md using Gemini."""
-    api_key = require_env("GEMINI_API_KEY")
-    original_lang = require_env("ORIGINAL_LANG", "Hungarian")
-
-    instructions = read_file(INSTRUCTIONS_FILE)
-    source = read_file(SOURCE_FILE)
-
-    print(f"→ Generating blog post in {original_lang} with Gemini...")
-
-    prompt = (
-        "Please read the following instructions carefully and follow them to write a blog post.\n\n"
-        "## INSTRUCTIONS\n\n"
-        f"{instructions}\n\n"
-        "## TASK\n\n"
-        f"Read the source content below and write a blog post from it in {original_lang} language. "
-        "Output only the blog post in Markdown format, with no additional commentary.\n\n"
-        "## SOURCE CONTENT\n\n"
-        f"{source}"
-    )
-
-    result = gemini_generate(api_key, prompt)
-    write_file(BLOGPOST_FILE, result)
-
-
-def cmd_translate(args):
-    """Translate BLOGPOST.md to TRANSLATED_BLOGPOST.md using Gemini."""
-    api_key = require_env("GEMINI_API_KEY")
-    translate_lang = require_env("TRANSLATE_LANG", "English")
-
-    blogpost = read_file(BLOGPOST_FILE)
-
-    print(f"→ Translating blog post to {translate_lang} with Gemini...")
-
-    prompt = (
-        f"Translate the following Markdown blog post into {translate_lang}. "
-        "Preserve all Markdown formatting, headings, links, and code blocks exactly. "
-        "Output only the translated Markdown with no additional commentary.\n\n"
-        f"{blogpost}"
-    )
-
-    result = gemini_generate(api_key, prompt)
-    write_file(TRANSLATED_FILE, result)
-
-
-def cmd_upload(args):
-    """Upload TRANSLATED_BLOGPOST.md to Wiki.js under /blog/{kebab-title}."""
-    base = require_env("WIKI_BASE_DOMAIN")
-    token = require_env("WIKI_TOKEN")
-
-    content = read_file(TRANSLATED_FILE)
-
-    # Extract H1 title
-    match = re.search(r"^#\s+(.+)", content, re.MULTILINE)
-    if not match:
-        print(f"ERROR: No H1 heading found in {TRANSLATED_FILE}", file=sys.stderr)
-        sys.exit(1)
-
-    title = match.group(1).strip()
-    content = re.sub(r"^#\s+.+\n?", "", content, count=1, flags=re.MULTILINE).lstrip("\n")
-    kebab = to_kebab(title)
-    page_path = f"blog/{kebab}"
-
-    print(f"→ Uploading to Wiki.js")
-    print(f" Title : {title}")
-    print(f" Path : /{page_path}")
-
-    # Check if page already exists
-    find_query = """
-    query ($path: String!) {
-      pages {
-        singleByPath(path: $path, locale: "en") {
-          id
-        }
-      }
-    }
-    """
-    find_resp = wiki_graphql(base, token, find_query, {"path": page_path})
-    existing = find_resp.get("data", {}).get("pages", {}).get("singleByPath")
-    existing_id = existing.get("id") if existing else None
-
-    if existing_id:
-        print(f" Found existing page id={existing_id}, updating...")
-        mutation = """
-        mutation ($id: Int!, $content: String!) {
-          pages {
-            update(id: $id, content: $content, tags: ["blog"]) {
-              responseResult { succeeded message }
-            }
-          }
-        }
-        """
-        variables = {"id": existing_id, "content": content}
-        resp = wiki_graphql(base, token, mutation, variables)
-        result = resp.get("data", {}).get("pages", {}).get("update", {}).get("responseResult", {})
-    else:
-        print(" Page not found, creating new...")
-        mutation = """
-        mutation ($path: String!, $title: String!, $content: String!) {
-          pages {
-            create(
-              path: $path
-              title: $title
-              content: $content
-              editor: "markdown"
-              locale: "en"
-              isPublished: true
-              isPrivate: false
-              tags: ["blog"]
-              description: ""
-            ) {
-              responseResult { succeeded message }
-              page { id }
-            }
-          }
-        }
-        """
-        variables = {"path": page_path, "title": title, "content": content}
-        resp = wiki_graphql(base, token, mutation, variables)
-        result = resp.get("data", {}).get("pages", {}).get("create", {}).get("responseResult", {})
-
-    errors = resp.get("errors")
-    if errors:
-        print(f"ERROR: {json.dumps(errors, indent=2)}", file=sys.stderr)
-        sys.exit(1)
-
-    if not result.get("succeeded"):
-        print(f"ERROR: Operation failed: {result.get('message')}", file=sys.stderr)
-        sys.exit(1)
-
-    print(f"✓ Successfully uploaded to {base}/{page_path}")
+class WikiJS:
+    def __init__(self, base_domain: str, token: str):
+        self.base_domain = base_domain.rstrip("/")
+        self.token = token
+        self.api_url = f"{self.base_domain}/graphql"
+
+    def graphql(self, query: str, variables: dict = None) -> dict:
+        payload = {"query": query}
+        if variables:
+            payload["variables"] = variables
+        headers = {
+            "Authorization": f"Bearer {self.token}",
+            "Content-Type": "application/json",
+        }
+        return http_post(self.api_url, payload, headers)
+
+    def get_page(self, path: str):
+        resp = self.graphql(QUERY_GET_PAGE, {"path": path})
+        return resp.get("data", {}).get("pages", {}).get("singleByPath"), resp
+
+    def find_page_id(self, path: str):
+        resp = self.graphql(QUERY_FIND_PAGE, {"path": path})
+        page = resp.get("data", {}).get("pages", {}).get("singleByPath")
+        return page.get("id") if page else None
+
+    def update_page(self, page_id: int, content: str, description: str):
+        variables = {"id": page_id, "content": content, "description": description}
+        resp = self.graphql(MUTATION_UPDATE_PAGE, variables)
+        return resp.get("data", {}).get("pages", {}).get("update", {}).get("responseResult", {}), resp
+
+    def create_page(self, path: str, title: str, content: str, description: str):
+        variables = {"path": path, "title": title, "content": content, "description": description}
+        resp = self.graphql(MUTATION_CREATE_PAGE, variables)
+        return resp.get("data", {}).get("pages", {}).get("create", {}).get("responseResult", {}), resp
+
+
+class GoogleGemini:
+    def __init__(self, api_key: str, model: str = GEMINI_MODEL):
+        self.api_key = api_key
+        self.model = model
+        self.url = f"{GEMINI_BASE_URL}/{self.model}:generateContent"
+
+    def generate(self, prompt: str) -> str:
+        payload = {"contents": [{"parts": [{"text": prompt}]}]}
+        headers = {"Content-Type": "application/json", "X-goog-api-key": self.api_key}
+        resp = http_post(self.url, payload, headers)
+        try:
+            return resp["candidates"][0]["content"]["parts"][0]["text"]
+        except (KeyError, IndexError):
+            print(f"ERROR: Unexpected Gemini response structure: {resp}", file=sys.stderr)
+            sys.exit(1)
+
+
+class BlogWriter:
+    def __init__(self):
+        self.wiki = WikiJS(
+            require_env("WIKI_BASE_DOMAIN"),
+            require_env("WIKI_TOKEN")
+        )
+        self.gemini = GoogleGemini(
+            require_env("GEMINI_API_KEY")
+        )
+
+    def fetch(self, url: str):
+        # Strip base domain from URL if full URL was given, then strip leading slash
+        page_path = url.replace(self.wiki.base_domain, "").lstrip("/")
+        print(f"→ Fetching wiki page: /{page_path}")
+
+        page, resp = self.wiki.get_page(page_path)
+
+        if not page:
+            errors = resp.get("errors", resp)
+            print(f"ERROR: Page not found at '{page_path}': {errors}", file=sys.stderr)
+            sys.exit(1)
+
+        write_file(SOURCE_FILE, page["content"])
+        write_file(SOURCE_TITLE_FILE, page["title"])
+
+    def write(self):
+        original_lang = require_env("ORIGINAL_LANG", "Hungarian")
+        instructions = read_file(INSTRUCTIONS_FILE)
+        source = read_file(SOURCE_FILE)
+
+        print(f"→ Generating blog post in {original_lang} with Gemini...")
+
+        prompt = WRITE_PROMPT_TEMPLATE.format(
+            instructions=instructions,
+            original_lang=original_lang,
+            source=source
+        )
+
+        result = self.gemini.generate(prompt)
+        write_file(BLOGPOST_FILE, result)
+
+    def translate(self):
+        translate_lang = require_env("TRANSLATE_LANG", "English")
+        blogpost = read_file(BLOGPOST_FILE)
+
+        print(f"→ Translating blog post to {translate_lang} with Gemini...")
+
+        prompt = TRANSLATE_PROMPT_TEMPLATE.format(
+            translate_lang=translate_lang,
+            blogpost=blogpost
+        )
+
+        result = self.gemini.generate(prompt)
+        write_file(TRANSLATED_FILE, result)
+
+    def upload(self):
+        content = read_file(TRANSLATED_FILE)
+        description = read_file(SOURCE_TITLE_FILE).strip()
+
+        # Extract H1 title
+        match = re.search(r"^#\s+(.+)", content, re.MULTILINE)
+        if not match:
+            print(f"ERROR: No H1 heading found in {TRANSLATED_FILE}", file=sys.stderr)
+            sys.exit(1)
+
+        title = match.group(1).strip()
+        content = re.sub(r"^#\s+.+\n?", "", content, count=1, flags=re.MULTILINE).lstrip("\n")
+        kebab = to_kebab(title)
+        page_path = f"blog/{kebab}"
+
+        print(f"→ Uploading to Wiki.js")
+        print(f" Title : {title}")
+        print(f" Path : /{page_path}")
+        print(f" Description: {description}")
+
+        existing_id = self.wiki.find_page_id(page_path)
+
+        if existing_id:
+            print(f" Found existing page id={existing_id}, updating...")
+            result, resp = self.wiki.update_page(existing_id, content, description)
+        else:
+            print(" Page not found, creating new...")
+            result, resp = self.wiki.create_page(page_path, title, content, description)
+
+        errors = resp.get("errors")
+        if errors:
+            print(f"ERROR: {json.dumps(errors, indent=2)}", file=sys.stderr)
+            sys.exit(1)
+
+        if not result.get("succeeded"):
+            print(f"ERROR: Operation failed: {result.get('message')}", file=sys.stderr)
+            sys.exit(1)
+
+        print(f"✓ Successfully uploaded to {self.wiki.base_domain}/{page_path}")
+
+    def clean(self):
+        """Delete all .md files in the output directory."""
+        if not os.path.exists(OUTPUT_DIR):
+            print(f"→ Output directory '{OUTPUT_DIR}' does not exist. Nothing to clean.")
+            return
+
+        print(f"→ Cleaning {OUTPUT_DIR}/...")
+        count = 0
+        for filename in os.listdir(OUTPUT_DIR):
+            if filename.endswith(".md") or filename.endswith(".txt"):
+                os.remove(os.path.join(OUTPUT_DIR, filename))
+                count += 1
+        print(f"✓ Removed {count} Markdown files.")
 
 
 # ---------------------------------------------------------------------------
```
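The upload path above hinges on one small transform: pull the first H1 out of the translated Markdown as the page title, strip it from the body, and slug it into the wiki path. A standalone sketch using a made-up sample document (slugging is simplified here; the real code calls `to_kebab`):

```python
import re

# Hypothetical translated blog post content.
content = "# Shipping the Blog Pipeline\n\nFirst paragraph.\n"

# Same regexes as the upload code above: find the H1, then remove it.
match = re.search(r"^#\s+(.+)", content, re.MULTILINE)
assert match is not None
title = match.group(1).strip()
body = re.sub(r"^#\s+.+\n?", "", content, count=1, flags=re.MULTILINE).lstrip("\n")

# Simplified slug in place of to_kebab, just for illustration.
page_path = f"blog/{title.lower().replace(' ', '-')}"
```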
```diff
@@ -294,23 +345,33 @@ def main():
     # fetch
     p_fetch = subparsers.add_parser("fetch", help="Download a Wiki.js page as Markdown")
     p_fetch.add_argument("url", help="Page path or full URL, e.g. /my-page or https://wiki.example.com/my-page")
-    p_fetch.set_defaults(func=cmd_fetch)
 
     # write
-    p_write = subparsers.add_parser("write", help=f"Generate blog post from {SOURCE_FILE} using Gemini")
-    p_write.set_defaults(func=cmd_write)
+    subparsers.add_parser("write", help=f"Generate blog post using Gemini")
 
     # translate
-    p_translate = subparsers.add_parser("translate", help=f"Translate {BLOGPOST_FILE} using Gemini")
-    p_translate.set_defaults(func=cmd_translate)
+    subparsers.add_parser("translate", help=f"Translate generated blog post using Gemini")
 
     # upload
-    p_upload = subparsers.add_parser("upload", help=f"Upload {TRANSLATED_FILE} to Wiki.js")
-    p_upload.set_defaults(func=cmd_upload)
+    subparsers.add_parser("upload", help=f"Upload translated blog post to Wiki.js")
+
+    # clean
+    subparsers.add_parser("clean", help=f"Delete all .md files in the {OUTPUT_DIR} directory")
 
     args = parser.parse_args()
-    args.func(args)
+    writer = BlogWriter()
+
+    if args.command == "fetch":
+        writer.fetch(args.url)
+    elif args.command == "write":
+        writer.write()
+    elif args.command == "translate":
+        writer.translate()
+    elif args.command == "upload":
+        writer.upload()
+    elif args.command == "clean":
+        writer.clean()
 
 
 if __name__ == "__main__":
     main()
```
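The reworked `main()` above drops `set_defaults(func=...)` in favor of a plain if/elif dispatch keyed on `args.command`. A condensed, runnable sketch of that subparser setup (help strings omitted):

```python
import argparse

# Mirrors the new main(): only "fetch" keeps a positional argument,
# the other subcommands are bare names routed by args.command.
parser = argparse.ArgumentParser(prog="generator.py")
subparsers = parser.add_subparsers(dest="command", required=True)
p_fetch = subparsers.add_parser("fetch")
p_fetch.add_argument("url")
for name in ("write", "translate", "upload", "clean"):
    subparsers.add_parser(name)

args = parser.parse_args(["fetch", "/my-page"])
```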
output/.keep — new empty file (0 lines)