Merge branch 'main' into maintenance/documentation-update

This commit is contained in:
mwisnowski 2025-10-02 16:30:16 -07:00 committed by GitHub
commit e95577a893
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
47 changed files with 3414 additions and 113 deletions


@ -72,6 +72,7 @@ WEB_AUTO_REFRESH_DAYS=7 # dockerhub: WEB_AUTO_REFRESH_DAYS="7"
WEB_TAG_PARALLEL=1 # dockerhub: WEB_TAG_PARALLEL="1"
WEB_TAG_WORKERS=2 # dockerhub: WEB_TAG_WORKERS="4"
WEB_AUTO_ENFORCE=0 # dockerhub: WEB_AUTO_ENFORCE="0"
# DFC_COMPAT_SNAPSHOT=0 # 1=write legacy unmerged MDFC snapshots alongside merged catalogs (deprecated compatibility workflow)
# WEB_CUSTOM_EXPORT_BASE= # Custom basename for exports (optional).
# THEME_CATALOG_YAML_SCAN_INTERVAL_SEC=2.0 # Poll for YAML changes (dev)
# WEB_THEME_FILTER_PREWARM=0 # 1=prewarm common filters for faster first renders


@ -13,14 +13,39 @@ This format follows Keep a Changelog principles and aims for Semantic Versioning
- Link PRs/issues inline when helpful, e.g., (#123) or [#123]. Reference-style links at the bottom are encouraged for readability.
## [Unreleased]
### Summary
- Wrapped up the Multi-Faced Card Handling roadmap (tag merge, commander eligibility, land accounting): double-faced cards now share tags, respect primary-face commander legality, and surface accurate land/MDFC diagnostics across web, CLI, and exports.
- Closed out MDFC follow-ups: deck summary now highlights double-faced lands with badges, per-face mana metadata flows through reporting, exports include annotations, and diagnostics can emit per-face snapshots for catalog QA.
- Surfaced commander exclusion warnings and automatic corrections in the builder so players are guided toward the legal front face whenever only a secondary face meets commander rules.
- Diagnostics dashboard now displays a multi-face merge snapshot plus live MDFC telemetry so catalog rebuilds and deck summaries can be verified in one place.
- Automated commander catalog refresh now ships with `python -m code.scripts.refresh_commander_catalog`, producing merged and compatibility snapshots alongside updated documentation for downstream consumers.
### Added
- Deck exporter regression coverage ensuring MDFC annotations (`DFCNote`) appear in CSV/TXT outputs, plus documentation for adding new double-faced cards to authoring workflows.
- Optional MDFC diagnostics snapshot toggled via `DFC_PER_FACE_SNAPSHOT` (with `DFC_PER_FACE_SNAPSHOT_PATH` override) to capture merged per-face metadata for observability.
- Structured observability for DFC merges: `multi_face_merger.py` now captures merge metrics and persists `logs/dfc_merge_summary.json` for troubleshooting.
- Land accounting coverage: `test_land_summary_totals.py` exercises MDFC totals, CLI output, and the deck summary HTMX fragment; shared fixtures added to `code/tests/conftest.py` for reuse.
- Tests: added `test_commander_primary_face_filter.py` to cover primary-face commander eligibility and secondary-face exclusions.
- Tests: added `test_commander_exclusion_warnings.py` to ensure commander exclusion guidance appears in the web builder and protects against regressions.
- Diagnostics: added a multi-face merge panel (with MDFC telemetry counters) to `/diagnostics`, powered by `summary_telemetry.py` and new land summary hooks.
- Commander browser skeleton page at `/commanders` with HTMX-capable filtering and catalog-backed commander rows.
- Shared color-identity macro and accessible theme chips powering the commander browser UI.
- Commander browser QA walkthrough documenting desktop and mobile validation steps (`docs/qa/commander_browser_walkthrough.md`).
- Home screen actions now surface Commander Browser and Diagnostics shortcuts when the corresponding feature flags are enabled.
- Manual QA pass (2025-09-30) recorded in project docs, covering desktop/mobile flows and edge cases.
- Commander wizard toggle to swap a matching basic land whenever modal double-faced lands are added, plus regression coverage in `test_mdfc_basic_swap.py`.
- Automation: `python -m code.scripts.refresh_commander_catalog` refreshes commander catalogs with MDFC-aware tagging, writing both merged output and `csv_files/compat_faces/commander_cards_unmerged.csv` for downstream validation; README and commander onboarding docs updated with migration guidance.
- Documentation: added `docs/qa/mdfc_staging_checklist.md` outlining MDFC staging QA (now updated for the always-on merge with optional compatibility snapshots).
### Changed
- Deck summary UI renders modal double-faced land badges and per-face details so builders can audit mana contributions at a glance.
- MDFC merge flag removed: `ENABLE_DFC_MERGE` no longer gates the multi-face merge; the merge now runs unconditionally with optional `DFC_COMPAT_SNAPSHOT` compatibility snapshots.
- New Deck modal commander search now flags secondary-face-only entries, shows inline guidance, and auto-fills the eligible face before starting a build.
- New Deck modal Preferences block now surfaces "Use only owned", "Prefer owned", and "Swap basics for MDFC lands" checkboxes with session-backed defaults so the wizard mirrors Step 4 behavior.
- Deck summary now surfaces "Lands: X (Y with DFC)" with an MDFC breakdown panel, and CLI summaries mirror the same copy so web/CLI diagnostics stay in sync.
- Deck summary builder now records MDFC land telemetry for diagnostics snapshots, enabling quick verification of land contributions across builds.
- Roadmap documentation now summarizes remaining DFC follow-ups (observability, rollout gating, and exporter/UI enhancements) with next steps and ownership notes.
- Commander CSV enrichment now backfills `themeTags`, `creatureTypes`, and `roleTags` from the color-tagged catalogs so primary-face enforcement keeps merged tag coverage for multi-face commanders.
- Commander CSV generation now enforces primary-face legality, dropping secondary-face-only records, writing `.commander_exclusions.json` diagnostics, and surfacing actionable headless errors when configs reference removed commanders.
- Commander browser now paginates results in 20-commander pages with accessible navigation controls and range summaries to keep the catalog responsive.
- Commander hover preview collapses to a card-only view when browsing commanders, and all theme chips display without the previous “+ more” overflow badge.
- Added a Content Security Policy upgrade directive so proxied HTTPS deployments safely rewrite commander pagination requests to HTTPS, preventing mixed-content blocks.
@ -34,9 +59,11 @@ This format follows Keep a Changelog principles and aims for Semantic Versioning
- Commander list pagination controls now appear above and below the results and automatically scroll to the top when switching pages for quicker navigation.
- Mobile commander rows now feature larger thumbnails and a centered preview modal with expanded card art for improved readability.
- Preview performance CI check now waits for `/healthz` and retries theme catalog pagination fetches to dodge transient 500s during cold starts.
- Documentation now captures the MDFC staging plan: README and DOCKER guide highlight the always-on MDFC merge and the optional `DFC_COMPAT_SNAPSHOT=1` workflow for downstream QA.
### Fixed
- Setup filtering now applies security-stamp exclusions case-insensitively so Acorn and Heart promo cards stay out of Commander-legal pools, with a regression test covering the behavior.
- Commander browser thumbnails now surface the double-faced flip control so MDFC commanders can swap faces directly from the catalog.
### Removed
- Preview performance GitHub Actions workflow (`.github/workflows/preview-perf-ci.yml`) retired after persistent cold-start failures; run the regression helper script manually as needed.


@ -19,7 +19,62 @@ docker compose stop web
docker compose start web
```
- Prefer the public image? Pull and run it without building locally:
Then open http://localhost:8080
Volumes are the same as the CLI service, so deck exports/logs/configs persist in your working folder.
The app serves a favicon at `/favicon.ico` and exposes a health endpoint at `/healthz`.
Compare view offers a Copy summary button to copy a plain-text diff of two runs. The sidebar has a subtle depth shadow for clearer separation.
Web UI feature highlights:
- Locks: Click a card or the lock control in Step 5; locks persist across reruns.
- Replace: Enable Replace in Step 5, click a card to open Alternatives (filters include Owned-only), then choose a swap.
- Permalinks: Copy a permalink from Step 5 or a Finished deck; paste via “Open Permalink…” to restore.
- Compare: Use the Compare page from Finished Decks; quick actions include Latest two and Swap A/B.
### Virtualized lists and lazy images (opt-in)
- Set `WEB_VIRTUALIZE=1` to enable virtualization in Step 5 grids/lists and the Owned library for smoother scrolling on large sets.
- Example (Compose):
```yaml
services:
web:
environment:
- WEB_VIRTUALIZE=1
```
- Example (Docker Hub):
```powershell
docker run --rm -p 8080:8080 `
-e WEB_VIRTUALIZE=1 `
-v "${PWD}/deck_files:/app/deck_files" `
-v "${PWD}/logs:/app/logs" `
-v "${PWD}/csv_files:/app/csv_files" `
-v "${PWD}/owned_cards:/app/owned_cards" `
-v "${PWD}/config:/app/config" `
-e SHOW_DIAGNOSTICS=1 `
mwisnowski/mtg-python-deckbuilder:latest `
bash -lc "cd /app && uvicorn code.web.app:app --host 0.0.0.0 --port 8080"
```
### Diagnostics and logs (optional)
Enable internal diagnostics and a read-only logs viewer with environment flags.
- `SHOW_DIAGNOSTICS=1` — adds a Diagnostics nav link and `/diagnostics` tools
- `SHOW_LOGS=1` — enables `/logs` and `/status/logs?tail=200`
Per-face MDFC snapshot (opt-in)
- `DFC_PER_FACE_SNAPSHOT=1` — write merged MDFC face metadata to `logs/dfc_per_face_snapshot.json`; disable parallel tagging (`WEB_TAG_PARALLEL=0`) if you need the snapshot during setup.
- `DFC_PER_FACE_SNAPSHOT_PATH=/app/logs/custom_snapshot.json` — optional path override for the snapshot artifact.
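Compose example (illustrative; combines the snapshot flags above with serial tagging so the snapshot can be written during setup):
```yaml
services:
  web:
    environment:
      - DFC_PER_FACE_SNAPSHOT=1
      - DFC_PER_FACE_SNAPSHOT_PATH=/app/logs/custom_snapshot.json
      - WEB_TAG_PARALLEL=0  # snapshot during setup requires serial tagging
```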
With `SHOW_LOGS`/`SHOW_DIAGNOSTICS` enabled:
- `/logs` supports an auto-refresh toggle with interval, a level filter (All/Error/Warning/Info/Debug), and a Copy button to copy the visible tail.
- `/status/sys` returns a simple system summary (version, uptime, UTC server time, and feature flags) and is shown on the Diagnostics page when `SHOW_DIAGNOSTICS=1`.
- Virtualization overlay: press `v` on pages with virtualized grids to toggle per-grid overlays and a global summary bubble.
Compose example (web service):
```yaml
environment:
- SHOW_LOGS=1
- SHOW_DIAGNOSTICS=1
```
```powershell
docker run --rm -p 8080:8080 `
@ -31,7 +86,31 @@ docker run --rm -p 8080:8080 `
mwisnowski/mtg-python-deckbuilder:latest
```
Shared volumes persist builds, logs, configs, and owned cards on the host.
### MDFC merge rollout (staging)
The web service now runs the MDFC merge by default. Set `DFC_COMPAT_SNAPSHOT=1` on the web service when you need the legacy unmerged compatibility snapshot (`csv_files/compat_faces/`). Combine this with `python -m code.scripts.refresh_commander_catalog --compat-snapshot` inside the container to regenerate the commander files before smoke testing.
Follow the QA steps in `docs/qa/mdfc_staging_checklist.md` after toggling the flag.
Compose example:
```yaml
services:
web:
environment:
- DFC_COMPAT_SNAPSHOT=1
```
Verify the refresh inside the container:
```powershell
docker compose run --rm web bash -lc "python -m code.scripts.refresh_commander_catalog"
```
Downstream consumers can diff `csv_files/compat_faces/commander_cards_unmerged.csv` against historical exports during the staging window.
### Setup speed: parallel tagging (Web)
First-time setup or stale data triggers card tagging. The web service uses parallel workers by default.
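Compose example tuning the tagging workers (values are illustrative; pick a worker count suited to your CPU):
```yaml
services:
  web:
    environment:
      - WEB_TAG_PARALLEL=1  # parallel tagging on (default)
      - WEB_TAG_WORKERS=4   # number of tagging workers
```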
## Run a JSON Config

README.md (binary file not shown)


@ -1,31 +1,32 @@
# MTG Python Deckbuilder ${VERSION}
## Summary
- Introduced the Commander Browser with HTMX-powered pagination, theme surfacing, and direct Create Deck integration.
- Shared color-identity macro and accessible theme chips power the new commander rows.
- Manual QA walkthrough (desktop + mobile) recorded on 2025-09-30 with edge-case checks.
- Home dashboard aligns its quick actions with feature flags, exposing Commanders, Diagnostics, Random, Logs, and Setup where enabled.
- Completed the Multi-Faced Card Handling roadmap: multi-face records share merged tags, commander eligibility now checks only primary faces, and land diagnostics stay consistent across web, CLI, and exports.
- Deck summary highlights modal double-faced lands with inline badges, and exports append MDFC annotations so offline reviews match the web experience.
- Deck summary now surfaces MDFC land contributions with "Lands: X (Y with DFC)" copy and an expandable breakdown for modal double-faced cards.
- CLI deck output mirrors the web summary so diagnostics stay in sync across interfaces.
- Web builder commander search now flags secondary-face-only commanders, auto-corrects to the legal face, and shows inline guidance sourced from `.commander_exclusions.json`.
- Diagnostics dashboard now surfaces the multi-face merge snapshot and MDFC telemetry, combining the persisted `logs/dfc_merge_summary.json` artifact with live deck summary counters.
- New Deck modal now mirrors Step 4 preferences with inline toggles for owned-only, prefer-owned, and MDFC basic swap so players can lock in their plan before starting a build.
- Restored setup filtering to exclude Acorn and Heart promotional security stamps so Commander card pools stay format-legal.
- Added a dedicated commander catalog refresh helper (`python -m code.scripts.refresh_commander_catalog`) that outputs both merged MDFC-aware data and an unmerged compatibility snapshot, with updated documentation guiding downstream migrations.
- Documented the staging rollout completion: Docker/README guidance now notes the MDFC merge is always on and explains how to emit optional compatibility snapshots (`DFC_COMPAT_SNAPSHOT=1`) for downstream QA.
## Added
- Commander browser skeleton page at `/commanders` with catalog-backed rows and accessible theme chips.
- Documented QA checklist and results for the commander browser launch in `docs/qa/commander_browser_walkthrough.md`.
- Shared color-identity macro for reusable mana dots across commander rows and other templates.
- Home dashboard Commander/Diagnostics shortcuts gated by feature flags so all primary destinations have quick actions.
- Manual QA pass entered into project docs (2025-09-30) outlining desktop, mobile, and edge-case validations.
## Changed
- Commander list paginates in 20-item pages, with navigation controls mirrored above and below the results and automatic scroll-to-top.
- Commander hover preview shows card-only panel in browser context and removes the “+ more” overflow badge from theme chips.
- Content Security Policy upgrade directive ensures HTMX pagination requests remain HTTPS-safe behind proxies.
- Commander thumbnails adopt a fixed-width 160px frame (responsive on small screens) for consistent layout.
- Commander browser now separates name vs theme search, adds fuzzy theme suggestions, and tightens commander name matching to near-exact results.
- Commander search results stay put while filtering; typing no longer auto-scrolls the page away from the filter controls.
- Commander theme chips are larger, wrap cleanly, and display an accessible summary dialog when tapped on mobile.
- Theme dialogs now surface the full editorial description when available, improving longer summaries on small screens.
- Commander theme names unescape leading punctuation (e.g., +2/+2 Counters) so labels render without stray backslashes.
- Theme summary dialog also opens on desktop clicks, giving parity with mobile behavior.
- Mobile commander rows now feature larger thumbnails and a centered preview modal with expanded card art for improved readability.
- Preview performance CI check now waits for service health and retries catalog pagination fetches to smooth out transient 500s on cold boots.
- Regression test coverage for MDFC export annotations and documentation outlining how to add new double-faced cards to the CSV authoring workflow.
- Optional MDFC per-face diagnostics snapshot controlled through `DFC_PER_FACE_SNAPSHOT` (with `DFC_PER_FACE_SNAPSHOT_PATH` override) for catalog QA.
- Structured DFC merge logging captured in `logs/dfc_merge_summary.json` for observability.
- Land accounting regression coverage via `test_land_summary_totals.py`, including an HTMX smoke test for the deck summary partial.
- Roadmap updates capturing remaining DFC observability, rollout, and export follow-ups with next-step notes.
- Regression test `test_commander_exclusion_warnings.py` ensuring builder guidance for secondary-face commanders stays in place.
- Regression test covering security-stamp filtering during setup to guard against future case-sensitivity regressions.
- Diagnostics panel for multi-face merges, backed by the new `summary_telemetry.py` land summary hook, plus telemetry snapshot endpoint for MDFC land contributions.
- Commander wizard checkbox to swap matching basics whenever modal double-faced lands are added, with dedicated regression coverage.
- New Deck modal exposes owned-only, prefer-owned, and MDFC swap toggles with session-backed defaults so preferences stick across runs.
- Commander catalog automation script (`python -m code.scripts.refresh_commander_catalog`) regenerates commander data, always applies the MDFC merge, and can optionally write compat-face snapshots; README and commander docs now include post-guard migration guidance.
- Docker and README documentation now outline the always-on MDFC merge and the optional `DFC_COMPAT_SNAPSHOT=1` workflow plus compatibility snapshot checkpoints for downstream consumers.
- QA documentation: added `docs/qa/mdfc_staging_checklist.md` outlining the staging validation pass required before removing the MDFC compatibility guard.
## Fixed
- Documented friendly handling for missing `commander_cards.csv` data during manual QA drills to prevent white-screen failures.
- Setup filtering now applies security-stamp exclusions case-insensitively, preventing Acorn/Heart promo cards from entering Commander pools.
- Commander browser thumbnails restore the double-faced flip control so MDFC commanders expose both faces directly in the catalog.


@ -0,0 +1,97 @@
from __future__ import annotations
import json
from functools import lru_cache
from pathlib import Path
from typing import Any, Dict, Optional
from settings import CSV_DIRECTORY
def _normalize(value: Any) -> str:
return str(value or "").strip().casefold()
def _exclusions_path() -> Path:
return Path(CSV_DIRECTORY) / ".commander_exclusions.json"
@lru_cache(maxsize=8)
def _load_index_cached(path_str: str, mtime: float) -> Dict[str, Dict[str, Any]]:
path = Path(path_str)
try:
with path.open("r", encoding="utf-8") as handle:
data = json.load(handle)
except Exception:
return {}
entries = data.get("secondary_face_only")
if not isinstance(entries, list):
return {}
index: Dict[str, Dict[str, Any]] = {}
for entry in entries:
if not isinstance(entry, dict):
continue
aliases = []
for key in (entry.get("name"), entry.get("primary_face")):
if key:
aliases.append(str(key))
faces = entry.get("faces")
if isinstance(faces, list):
aliases.extend(str(face) for face in faces if face)
eligible = entry.get("eligible_faces")
if isinstance(eligible, list):
aliases.extend(str(face) for face in eligible if face)
for alias in aliases:
norm = _normalize(alias)
if not norm:
continue
index[norm] = entry
return index
def _load_index() -> Dict[str, Dict[str, Any]]:
path = _exclusions_path()
if not path.is_file():
return {}
try:
stat = path.stat()
mtime = float(f"{stat.st_mtime:.6f}")
except Exception:
mtime = 0.0
return _load_index_cached(str(path.resolve()), mtime)
def lookup_commander(name: str) -> Optional[Dict[str, Any]]:
if not name:
return None
index = _load_index()
return index.get(_normalize(name))
def lookup_commander_detail(name: str) -> Optional[Dict[str, Any]]:
entry = lookup_commander(name)
if entry is None:
return None
data = dict(entry)
data.setdefault("primary_face", entry.get("primary_face") or entry.get("name"))
data.setdefault("eligible_faces", entry.get("eligible_faces") or [])
data.setdefault("reason", "secondary_face_only")
return data
def exclusions_summary() -> Dict[str, Any]:
index = _load_index()
return {
"count": len(index),
"entries": sorted(
[
{
"name": entry.get("name") or entry.get("primary_face") or key,
"primary_face": entry.get("primary_face") or entry.get("name") or key,
"eligible_faces": entry.get("eligible_faces") or [],
}
for key, entry in index.items()
],
key=lambda x: x["name"],
),
}
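The alias indexing above resolves every known face name of an excluded commander, case-insensitively, to the same entry. A minimal standalone sketch of the pattern (the entry shape mirrors `.commander_exclusions.json`, but the card name is invented):

```python
def build_alias_index(entries: list) -> dict:
    """Map every alias (name, primary face, faces, eligible faces) to its entry."""
    index = {}
    for entry in entries:
        aliases = [entry.get("name"), entry.get("primary_face")]
        aliases.extend(entry.get("faces") or [])
        aliases.extend(entry.get("eligible_faces") or [])
        for alias in aliases:
            # Same normalization as _normalize above: strip + casefold.
            key = str(alias or "").strip().casefold()
            if key:
                index[key] = entry
    return index

entries = [{
    "name": "Example Hero // Example Form",
    "primary_face": "Example Hero",
    "faces": ["Example Hero", "Example Form"],
    "eligible_faces": ["Example Hero"],
}]
index = build_alias_index(entries)
# Looking up a secondary face still resolves to the eligible primary face.
assert index["example form"]["primary_face"] == "Example Hero"
```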


@ -458,6 +458,8 @@ class DeckBuilder(
fetch_count: Optional[int] = None
# Whether this build is running in headless mode (suppress some interactive-only exports)
headless: bool = False
# Preference: swap a matching basic for modal double-faced lands when they are added
swap_mdfc_basics: bool = False
def __post_init__(self):
"""Post-init hook to wrap the provided output function so that all user-facing
@ -1766,6 +1768,9 @@ class DeckBuilder(
except Exception:
pass
# If configured, offset modal DFC land additions by trimming a matching basic
self._maybe_offset_basic_for_modal_land(card_name)
def _remove_from_pool(self, card_name: str):
if self._combined_cards_df is None:
return
@ -2275,6 +2280,59 @@ class DeckBuilder(
return bu.choose_basic_to_trim(self.card_library)
def _maybe_offset_basic_for_modal_land(self, card_name: str) -> None:
"""If enabled, remove one matching basic when a modal DFC land is added."""
if not getattr(self, 'swap_mdfc_basics', False):
return
try:
entry = self.card_library.get(card_name)
if entry and entry.get('Commander'):
return
# Force a fresh matrix so the newly added card is represented
self._color_source_cache_dirty = True
matrix = self._compute_color_source_matrix()
except Exception:
return
colors = matrix.get(card_name)
if not colors or not colors.get('_dfc_counts_as_extra'):
return
candidate_colors = [c for c in ['W', 'U', 'B', 'R', 'G', 'C'] if colors.get(c)]
if not candidate_colors:
return
matches: List[tuple[int, str, str]] = []
color_map = getattr(bc, 'COLOR_TO_BASIC_LAND', {})
snow_map = getattr(bc, 'SNOW_BASIC_LAND_MAPPING', {})
for color in candidate_colors:
names: List[str] = []
base = color_map.get(color)
if base:
names.append(base)
snow = snow_map.get(color)
if snow and snow not in names:
names.append(snow)
for nm in names:
entry = self.card_library.get(nm)
if entry and entry.get('Count', 0) > 0:
matches.append((int(entry.get('Count', 0)), nm, color))
break
if matches:
matches.sort(key=lambda x: x[0], reverse=True)
_, target_name, target_color = matches[0]
if self._decrement_card(target_name):
logger.info(
"MDFC swap: %s removed %s to keep land totals aligned",
card_name,
target_name,
)
return
fallback = self._choose_basic_to_trim()
if fallback and self._decrement_card(fallback):
logger.info(
"MDFC swap fallback: %s trimmed %s to maintain land total",
card_name,
fallback,
)
def _decrement_card(self, name: str) -> bool:
entry = self.card_library.get(name)
if not entry:


@ -16,7 +16,11 @@ MAX_FUZZY_CHOICES: Final[int] = 5 # Maximum number of fuzzy match choices
DUPLICATE_CARD_FORMAT: Final[str] = '{card_name} x {count}'
COMMANDER_CSV_PATH: Final[str] = f"{csv_dir()}/commander_cards.csv"
DECK_DIRECTORY = '../deck_files'
COMMANDER_CONVERTERS: Final[Dict[str, str]] = {
'themeTags': ast.literal_eval,
'creatureTypes': ast.literal_eval,
'roleTags': ast.literal_eval,
} # CSV loading converters
COMMANDER_POWER_DEFAULT: Final[int] = 0
COMMANDER_TOUGHNESS_DEFAULT: Final[int] = 0
COMMANDER_MANA_VALUE_DEFAULT: Final[int] = 0
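To show how these converters behave, a minimal round trip with an in-memory CSV (column values are illustrative):

```python
import ast
import io

import pandas as pd

csv_text = 'name,themeTags,roleTags\nSome Commander,"[\'Counters\']","[\'Support\']"\n'
df = pd.read_csv(
    io.StringIO(csv_text),
    converters={"themeTags": ast.literal_eval, "roleTags": ast.literal_eval},
)
# Each list-valued column is parsed back into a real Python list.
assert df.loc[0, "themeTags"] == ["Counters"]
assert df.loc[0, "roleTags"] == ["Support"]
```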


@ -8,15 +8,159 @@ Only import lightweight standard library modules here to avoid import cycles.
"""
from __future__ import annotations
from typing import Any, Dict, Iterable, List
import re
import ast
import random as _rand
from functools import lru_cache
from pathlib import Path
import pandas as pd
from . import builder_constants as bc
import math
from path_util import csv_dir
COLOR_LETTERS = ['W', 'U', 'B', 'R', 'G']
_MULTI_FACE_LAYOUTS = {
"adventure",
"aftermath",
"augment",
"flip",
"host",
"meld",
"modal_dfc",
"reversible_card",
"split",
"transform",
}
_SIDE_PRIORITY = {
"": 0,
"a": 0,
"front": 0,
"main": 0,
"b": 1,
"back": 1,
"c": 2,
}
def _detect_produces_mana(text: str) -> bool:
text = (text or "").lower()
if not text:
return False
if 'add one mana of any color' in text or 'add one mana of any colour' in text:
return True
if 'add mana of any color' in text or 'add mana of any colour' in text:
return True
if 'mana of any one color' in text or 'any color of mana' in text:
return True
if 'add' in text:
for sym in ('{w}', '{u}', '{b}', '{r}', '{g}', '{c}'):
if sym in text:
return True
return False
def _resolved_csv_dir(base_dir: str | None = None) -> str:
try:
if base_dir:
return str(Path(base_dir).resolve())
return str(Path(csv_dir()).resolve())
except Exception:
return base_dir or csv_dir()
@lru_cache(maxsize=None)
def _load_multi_face_land_map(base_dir: str) -> Dict[str, Dict[str, Any]]:
"""Load mapping of multi-faced cards that have at least one land face."""
try:
base_path = Path(base_dir)
csv_path = base_path / 'cards.csv'
if not csv_path.exists():
return {}
usecols = ['name', 'layout', 'side', 'type', 'text', 'manaCost', 'manaValue', 'faceName']
df = pd.read_csv(csv_path, usecols=usecols, low_memory=False)
except Exception:
return {}
if df.empty or 'layout' not in df.columns or 'type' not in df.columns:
return {}
df['layout'] = df['layout'].fillna('').astype(str).str.lower()
multi_df = df[df['layout'].isin(_MULTI_FACE_LAYOUTS)].copy()
if multi_df.empty:
return {}
multi_df['type'] = multi_df['type'].fillna('').astype(str)
multi_df['side'] = multi_df['side'].fillna('').astype(str)
multi_df['text'] = multi_df['text'].fillna('').astype(str)
land_rows = multi_df[multi_df['type'].str.contains('land', case=False, na=False)]
if land_rows.empty:
return {}
mapping: Dict[str, Dict[str, Any]] = {}
for name, group in land_rows.groupby('name', sort=False):
faces: List[Dict[str, str]] = []
seen: set[tuple[str, str, str]] = set()
front_is_land = False
layout_val = ''
for _, row in group.iterrows():
side_raw = str(row.get('side', '') or '').strip()
side_key = side_raw.lower()
if not side_key:
side_key = 'a'
type_val = str(row.get('type', '') or '')
text_val = str(row.get('text', '') or '')
mana_cost_val = str(row.get('manaCost', '') or '')
mana_value_raw = row.get('manaValue', '')
mana_value_val = None
try:
if mana_value_raw not in (None, ''):
mana_value_val = float(mana_value_raw)
if math.isnan(mana_value_val):
mana_value_val = None
except Exception:
mana_value_val = None
face_label = str(row.get('faceName', '') or row.get('name', '') or '')
produces_mana = _detect_produces_mana(text_val)
signature = (side_key, type_val, text_val)
if signature in seen:
continue
seen.add(signature)
faces.append({
'face': face_label,
'side': side_key,
'type': type_val,
'text': text_val,
'mana_cost': mana_cost_val,
'mana_value': mana_value_val,
'produces_mana': produces_mana,
'is_land': 'land' in type_val.lower(),
'layout': str(row.get('layout', '') or ''),
})
if side_key in ('', 'a', 'front', 'main'):
front_is_land = True
layout_val = layout_val or str(row.get('layout', '') or '')
if not faces:
continue
faces.sort(key=lambda face: _SIDE_PRIORITY.get(face.get('side', ''), 3))
mapping[name] = {
'faces': faces,
'front_is_land': front_is_land,
'layout': layout_val,
}
return mapping
def multi_face_land_info(name: str, base_dir: str | None = None) -> Dict[str, Any]:
return _load_multi_face_land_map(_resolved_csv_dir(base_dir)).get(name, {})
def get_multi_face_land_faces(name: str, base_dir: str | None = None) -> List[Dict[str, str]]:
entry = multi_face_land_info(name, base_dir)
return list(entry.get('faces', []))
def has_multi_face_land(name: str, base_dir: str | None = None) -> bool:
entry = multi_face_land_info(name, base_dir)
return bool(entry and entry.get('faces'))
def parse_theme_tags(val) -> list[str]:
@ -90,13 +234,49 @@ def compute_color_source_matrix(card_library: Dict[str, dict], full_df) -> Dict[
nm = str(r.get('name', ''))
if nm and nm not in lookup:
lookup[nm] = r
try:
dfc_map = _load_multi_face_land_map(_resolved_csv_dir())
except Exception:
dfc_map = {}
for name, entry in card_library.items():
row = lookup.get(name, {})
entry_type_raw = str(entry.get('Card Type') or entry.get('Type') or '')
entry_type = entry_type_raw.lower()
row_type_raw = ''
if hasattr(row, 'get'):
row_type_raw = row.get('type', row.get('type_line', '')) or ''
tline_full = str(row_type_raw).lower()
# Land or permanent that could produce mana via text
is_land = ('land' in entry_type) or ('land' in tline_full)
base_is_land = is_land
text_field_raw = ''
if hasattr(row, 'get'):
text_field_raw = row.get('text', row.get('oracleText', '')) or ''
if pd.isna(text_field_raw):
text_field_raw = ''
text_field_raw = str(text_field_raw)
dfc_entry = dfc_map.get(name)
if dfc_entry:
faces = dfc_entry.get('faces', []) or []
if faces:
face_types: List[str] = []
face_texts: List[str] = []
for face in faces:
type_val = str(face.get('type', '') or '')
text_val = str(face.get('text', '') or '')
if type_val:
face_types.append(type_val)
if text_val:
face_texts.append(text_val)
if face_types:
joined_types = ' '.join(face_types)
tline_full = (tline_full + ' ' + joined_types.lower()).strip()
if face_texts:
joined_text = ' '.join(face_texts)
text_field_raw = (text_field_raw + ' ' + joined_text).strip()
if face_types or face_texts:
is_land = True
text_field = text_field_raw.lower().replace('\n', ' ')
# Skip obvious non-permanents (rituals etc.)
if (not is_land) and ('instant' in entry_type or 'sorcery' in entry_type or 'instant' in tline_full or 'sorcery' in tline_full):
continue
@ -166,8 +346,13 @@ def compute_color_source_matrix(card_library: Dict[str, dict], full_df) -> Dict[
col = mapping.get(base)
if col:
colors[col] = 1
dfc_is_land = bool(dfc_entry and dfc_entry.get('faces'))
if dfc_is_land:
colors['_dfc_land'] = True
if not (base_is_land or dfc_entry.get('front_is_land')):
colors['_dfc_counts_as_extra'] = True
produces_any_color = any(colors[c] for c in ('W', 'U', 'B', 'R', 'G', 'C'))
if produces_any_color or colors.get('_dfc_land'):
matrix[name] = colors
return matrix
@ -210,11 +395,15 @@ def compute_spell_pip_weights(card_library: Dict[str, dict], color_identity: Ite
return {c: (pip_counts[c] / total_colored) for c in pip_counts}
__all__ = [
'compute_color_source_matrix',
'compute_spell_pip_weights',
'parse_theme_tags',
'normalize_theme_list',
'multi_face_land_info',
'get_multi_face_land_faces',
'has_multi_face_land',
'detect_viable_multi_copy_archetypes',
'prefer_owned_first',
'compute_adjusted_target',
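As a quick illustration of the `_detect_produces_mana` heuristic defined above, here is a trimmed re-implementation covering its two main branches (any-color phrases, or "add" plus an explicit mana symbol); the full function checks a few more phrasings:

```python
def detect_produces_mana(text: str) -> bool:
    """Trimmed sketch of the heuristic: any-color phrase, or 'add' + mana symbol."""
    text = (text or "").lower()
    if not text:
        return False
    if "add one mana of any color" in text or "mana of any one color" in text:
        return True
    if "add" in text:
        return any(sym in text for sym in ("{w}", "{u}", "{b}", "{r}", "{g}", "{c}"))
    return False

assert detect_produces_mana("{T}: Add {G}.") is True
assert detect_produces_mana("Flying") is False
```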


@ -1,12 +1,15 @@
from __future__ import annotations
from typing import Dict, List
from typing import Any, Dict, List
import csv
import os
import datetime as _dt
import re as _re
import logging_util
from code.deck_builder.summary_telemetry import record_land_summary
from code.deck_builder.shared_copy import build_land_headline, dfc_card_note
logger = logging_util.logging.getLogger(__name__)
try:
@ -285,6 +288,36 @@ class ReportingMixin:
pct = (c / total_cards * 100) if total_cards else 0.0
self.output_func(f" {cat:<15} {c:>3} ({pct:5.1f}%)")
# Surface land vs. MDFC counts for CLI users to mirror web summary copy
try:
summary = self.build_deck_summary() # type: ignore[attr-defined]
except Exception:
summary = None
if isinstance(summary, dict):
land_summary = summary.get('land_summary') or {}
if isinstance(land_summary, dict) and land_summary:
traditional = int(land_summary.get('traditional', 0))
dfc_bonus = int(land_summary.get('dfc_lands', 0))
with_dfc = int(land_summary.get('with_dfc', traditional + dfc_bonus))
headline = land_summary.get('headline')
if not headline:
headline = build_land_headline(traditional, dfc_bonus, with_dfc)
self.output_func(f" {headline}")
dfc_cards = land_summary.get('dfc_cards') or []
if isinstance(dfc_cards, list) and dfc_cards:
self.output_func(" MDFC sources:")
for entry in dfc_cards:
try:
name = str(entry.get('name', ''))
count = int(entry.get('count', 1))
except Exception:
name, count = str(entry.get('name', '')), 1
colors = entry.get('colors') or []
colors_txt = ', '.join(colors) if colors else '-'
adds_extra = bool(entry.get('adds_extra_land') or entry.get('counts_as_extra'))
note = entry.get('note') or dfc_card_note(adds_extra)
self.output_func(f" - {name} ×{count} ({colors_txt}) — {note}")
# ---------------------------
# Structured deck summary for UI (types, pips, sources, curve)
# ---------------------------
@@ -347,6 +380,41 @@ class ReportingMixin:
return 'Land'
return 'Other'
builder_utils_module = None
try:
from deck_builder import builder_utils as _builder_utils # type: ignore
builder_utils_module = _builder_utils
color_matrix = builder_utils_module.compute_color_source_matrix(self.card_library, full_df)
except Exception:
color_matrix = {}
dfc_land_lookup: Dict[str, Dict[str, Any]] = {}
if color_matrix:
for name, flags in color_matrix.items():
if not bool(flags.get('_dfc_land')):
continue
counts_as_extra = bool(flags.get('_dfc_counts_as_extra'))
note_text = dfc_card_note(counts_as_extra)
card_colors = [color for color in ('W', 'U', 'B', 'R', 'G', 'C') if flags.get(color)]
faces_meta: list[Dict[str, Any]] = []
layout_val = None
if builder_utils_module is not None:
try:
mf_info = builder_utils_module.multi_face_land_info(name)
except Exception:
mf_info = {}
faces_meta = list(mf_info.get('faces', [])) if isinstance(mf_info, dict) else []
layout_val = mf_info.get('layout') if isinstance(mf_info, dict) else None
dfc_land_lookup[name] = {
'adds_extra_land': counts_as_extra,
'counts_as_land': not counts_as_extra,
'note': note_text,
'colors': card_colors,
'faces': faces_meta,
'layout': layout_val,
}
else:
color_matrix = {}
# Type breakdown (counts and per-type card lists)
type_counts: Dict[str, int] = {}
type_cards: Dict[str, list] = {}
@@ -364,17 +432,31 @@ class ReportingMixin:
category = classify(base_type, name)
type_counts[category] = type_counts.get(category, 0) + cnt
total_cards += cnt
type_cards.setdefault(category, []).append({
card_entry = {
'name': name,
'count': cnt,
'role': info.get('Role', '') or '',
'tags': list(info.get('Tags', []) or []),
})
}
dfc_meta = dfc_land_lookup.get(name)
if dfc_meta:
card_entry['dfc'] = True
card_entry['dfc_land'] = True
card_entry['dfc_adds_extra_land'] = bool(dfc_meta.get('adds_extra_land'))
card_entry['dfc_counts_as_land'] = bool(dfc_meta.get('counts_as_land'))
card_entry['dfc_note'] = dfc_meta.get('note', '')
card_entry['dfc_colors'] = list(dfc_meta.get('colors', []))
card_entry['dfc_faces'] = list(dfc_meta.get('faces', []))
type_cards.setdefault(category, []).append(card_entry)
# Sort cards within each type by name
for cat, lst in type_cards.items():
lst.sort(key=lambda x: (x['name'].lower(), -int(x['count'])))
type_order = sorted(type_counts.keys(), key=lambda k: precedence_index.get(k, 999))
# Track multi-face land contributions for later summary display
dfc_details: list[dict] = []
dfc_extra_total = 0
# Pip distribution (counts and weights) for non-land spells only
pip_counts = {c: 0 for c in ('W','U','B','R','G')}
# For UI cross-highlighting: map color -> list of cards that have that color pip in their cost
@@ -425,21 +507,52 @@ class ReportingMixin:
pip_weights = {c: (pip_counts[c] / total_pips if total_pips else 0.0) for c in pip_counts}
# Mana generation from lands (color sources)
try:
from deck_builder import builder_utils as _bu
matrix = _bu.compute_color_source_matrix(self.card_library, full_df)
except Exception:
matrix = {}
matrix = color_matrix
source_counts = {c: 0 for c in ('W','U','B','R','G','C')}
# For UI cross-highlighting: color -> list of cards that produce that color (typically lands, possibly others)
source_cards: Dict[str, list] = {c: [] for c in ('W','U','B','R','G','C')}
for name, flags in matrix.items():
copies = int(self.card_library.get(name, {}).get('Count', 1))
is_dfc_land = bool(flags.get('_dfc_land'))
counts_as_extra = bool(flags.get('_dfc_counts_as_extra'))
dfc_meta = dfc_land_lookup.get(name)
for c in source_counts.keys():
if int(flags.get(c, 0)):
source_counts[c] += copies
source_cards[c].append({'name': name, 'count': copies})
entry = {'name': name, 'count': copies, 'dfc': is_dfc_land}
if dfc_meta:
entry['dfc_note'] = dfc_meta.get('note', '')
entry['dfc_adds_extra_land'] = bool(dfc_meta.get('adds_extra_land'))
source_cards[c].append(entry)
if is_dfc_land:
card_colors = list(dfc_meta.get('colors', [])) if dfc_meta else [color for color in ('W','U','B','R','G','C') if flags.get(color)]
note_text = dfc_meta.get('note') if dfc_meta else dfc_card_note(counts_as_extra)
adds_extra = bool(dfc_meta.get('adds_extra_land')) if dfc_meta else counts_as_extra
counts_as_land = bool(dfc_meta.get('counts_as_land')) if dfc_meta else not counts_as_extra
faces_meta = list(dfc_meta.get('faces', [])) if dfc_meta else []
layout_val = dfc_meta.get('layout') if dfc_meta else None
dfc_details.append({
'name': name,
'count': copies,
'colors': card_colors,
'counts_as_land': counts_as_land,
'adds_extra_land': adds_extra,
'counts_as_extra': adds_extra,
'note': note_text,
'faces': faces_meta,
'layout': layout_val,
})
if adds_extra:
dfc_extra_total += copies
total_sources = sum(source_counts.values())
traditional_lands = type_counts.get('Land', 0)
land_summary = {
'traditional': traditional_lands,
'dfc_lands': dfc_extra_total,
'with_dfc': traditional_lands + dfc_extra_total,
'dfc_cards': dfc_details,
'headline': build_land_headline(traditional_lands, dfc_extra_total, traditional_lands + dfc_extra_total),
}
# Mana curve (non-land spells)
curve_bins = ['0','1','2','3','4','5','6+']
@@ -484,7 +597,7 @@ class ReportingMixin:
'duplicates_collapsed': diagnostics.get('duplicates_collapsed', {}),
}
return {
summary_payload = {
'type_breakdown': {
'counts': type_counts,
'order': type_order,
@@ -506,9 +619,15 @@ class ReportingMixin:
'total_spells': total_spells,
'cards': curve_cards,
},
'land_summary': land_summary,
'colors': list(getattr(self, 'color_identity', []) or []),
'include_exclude_summary': include_exclude_summary,
}
try:
record_land_summary(land_summary)
except Exception: # pragma: no cover - diagnostics only
logger.debug("Failed to record MDFC telemetry", exc_info=True)
return summary_payload
def export_decklist_csv(self, directory: str = 'deck_files', filename: str | None = None, suppress_output: bool = False) -> str:
"""Export current decklist to CSV (enriched).
Filename pattern (default): commanderFirstWord_firstTheme_YYYYMMDD.csv
@@ -574,9 +693,26 @@ class ReportingMixin:
if nm not in row_lookup:
row_lookup[nm] = r
builder_utils_module = None
try:
from deck_builder import builder_utils as builder_utils_module # type: ignore
color_matrix = builder_utils_module.compute_color_source_matrix(self.card_library, full_df)
except Exception:
color_matrix = {}
dfc_land_lookup: Dict[str, Dict[str, Any]] = {}
for card_name, flags in color_matrix.items():
if not bool(flags.get('_dfc_land')):
continue
counts_as_extra = bool(flags.get('_dfc_counts_as_extra'))
note_text = dfc_card_note(counts_as_extra)
dfc_land_lookup[card_name] = {
'note': note_text,
'adds_extra_land': counts_as_extra,
}
headers = [
"Name","Count","Type","ManaCost","ManaValue","Colors","Power","Toughness",
"Role","SubRole","AddedBy","TriggerTag","Synergy","Tags","Text","Owned"
"Role","SubRole","AddedBy","TriggerTag","Synergy","Tags","Text","DFCNote","Owned"
]
# Precedence list for sorting
@@ -680,6 +816,12 @@ class ReportingMixin:
prec = precedence_index.get(cat, 999)
# Alphabetical within category (no mana value sorting)
owned_flag = 'Y' if (name.lower() in owned_set_lower) else ''
dfc_meta = dfc_land_lookup.get(name)
dfc_note = ''
if dfc_meta:
note_text = dfc_meta.get('note')
if note_text:
dfc_note = f"MDFC: {note_text}"
rows.append(((prec, name.lower()), [
name,
info.get('Count', 1),
@ -696,6 +838,7 @@ class ReportingMixin:
info.get('Synergy') if info.get('Synergy') is not None else '',
tags_join,
text_field[:800] if isinstance(text_field, str) else str(text_field)[:800],
dfc_note,
owned_flag
]))
@@ -804,6 +947,18 @@ class ReportingMixin:
if nm not in row_lookup:
row_lookup[nm] = r
try:
from deck_builder import builder_utils as _builder_utils # type: ignore
color_matrix = _builder_utils.compute_color_source_matrix(self.card_library, full_df)
except Exception:
color_matrix = {}
dfc_land_lookup: Dict[str, str] = {}
for card_name, flags in color_matrix.items():
if not bool(flags.get('_dfc_land')):
continue
counts_as_extra = bool(flags.get('_dfc_counts_as_extra'))
dfc_land_lookup[card_name] = dfc_card_note(counts_as_extra)
sortable: List[tuple] = []
for name, info in self.card_library.items():
base_type = info.get('Card Type') or info.get('Type','')
@@ -814,12 +969,16 @@ class ReportingMixin:
base_type = row_type
cat = classify(base_type, name)
prec = precedence_index.get(cat, 999)
sortable.append(((prec, name.lower()), name, info.get('Count',1)))
dfc_note = dfc_land_lookup.get(name)
sortable.append(((prec, name.lower()), name, info.get('Count',1), dfc_note))
sortable.sort(key=lambda x: x[0])
with open(path, 'w', encoding='utf-8') as f:
for _, name, count in sortable:
f.write(f"{count} {name}\n")
for _, name, count, dfc_note in sortable:
line = f"{count} {name}"
if dfc_note:
line += f" [MDFC: {dfc_note}]"
f.write(line + "\n")
if not suppress_output:
self.output_func(f"Plaintext deck list exported to {path}")
return path


@@ -0,0 +1,30 @@
"""Shared text helpers to keep CLI and web copy in sync."""
from __future__ import annotations
from typing import Optional
__all__ = ["build_land_headline", "dfc_card_note"]
def build_land_headline(traditional: int, dfc_bonus: int, with_dfc: Optional[int] = None) -> str:
"""Return the consistent land summary headline.
Args:
traditional: Count of traditional land slots.
dfc_bonus: Number of MDFC lands counted as additional slots.
with_dfc: Optional total including MDFC lands. If omitted, the sum of
``traditional`` and ``dfc_bonus`` is used.
"""
base = max(int(traditional), 0)
bonus = max(int(dfc_bonus), 0)
total = int(with_dfc) if with_dfc is not None else base + bonus
headline = f"Lands: {base}"
if bonus:
headline += f" ({total} with DFC)"
return headline
def dfc_card_note(counts_as_extra: bool) -> str:
"""Return the descriptive note for an MDFC land entry."""
return "Adds extra land slot" if counts_as_extra else "Counts as land slot"
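Since both helpers are fully shown above, a short self-contained copy demonstrates the exact strings the CLI and web summaries share (the function bodies below are reproduced from this file):

```python
from typing import Optional

# Reproduced from shared_copy for illustration.
def build_land_headline(traditional: int, dfc_bonus: int, with_dfc: Optional[int] = None) -> str:
    base = max(int(traditional), 0)
    bonus = max(int(dfc_bonus), 0)
    total = int(with_dfc) if with_dfc is not None else base + bonus
    headline = f"Lands: {base}"
    if bonus:
        headline += f" ({total} with DFC)"
    return headline

def dfc_card_note(counts_as_extra: bool) -> str:
    return "Adds extra land slot" if counts_as_extra else "Counts as land slot"

print(build_land_headline(35, 3))  # Lands: 35 (38 with DFC)
print(build_land_headline(35, 0))  # Lands: 35
print(dfc_card_note(True))         # Adds extra land slot
```

Note that the "(… with DFC)" suffix only appears when the MDFC bonus is non-zero, so decks without double-faced lands keep the familiar one-number headline.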


@@ -0,0 +1,122 @@
from __future__ import annotations
import threading
import time
from collections import Counter
from typing import Any, Dict, Iterable
__all__ = [
"record_land_summary",
"get_mdfc_metrics",
]
_lock = threading.Lock()
_metrics: Dict[str, Any] = {
"total_builds": 0,
"builds_with_mdfc": 0,
"total_mdfc_lands": 0,
"last_updated": None,
"last_updated_iso": None,
"last_summary": None,
}
_top_cards: Counter[str] = Counter()
def _to_int(value: Any) -> int:
try:
if value is None:
return 0
if isinstance(value, bool):
return int(value)
return int(float(value))
except (TypeError, ValueError):
return 0
def _sanitize_cards(cards: Iterable[Dict[str, Any]] | None) -> list[Dict[str, Any]]:
if not cards:
return []
sanitized: list[Dict[str, Any]] = []
for entry in cards:
if not isinstance(entry, dict):
continue
name = str(entry.get("name") or "").strip()
if not name:
continue
count = _to_int(entry.get("count", 1)) or 1
colors = entry.get("colors")
if isinstance(colors, (list, tuple)):
color_list = [str(c) for c in colors if str(c)]
else:
color_list = []
sanitized.append(
{
"name": name,
"count": count,
"colors": color_list,
"counts_as_land": bool(entry.get("counts_as_land")),
"adds_extra_land": bool(entry.get("adds_extra_land")),
}
)
return sanitized
def record_land_summary(land_summary: Dict[str, Any] | None) -> None:
if not isinstance(land_summary, dict):
return
dfc_lands = _to_int(land_summary.get("dfc_lands"))
with_dfc = _to_int(land_summary.get("with_dfc"))
timestamp = time.time()
cards = _sanitize_cards(land_summary.get("dfc_cards"))
with _lock:
_metrics["total_builds"] = int(_metrics.get("total_builds", 0)) + 1
if dfc_lands > 0:
_metrics["builds_with_mdfc"] = int(_metrics.get("builds_with_mdfc", 0)) + 1
_metrics["total_mdfc_lands"] = int(_metrics.get("total_mdfc_lands", 0)) + dfc_lands
for entry in cards:
_top_cards[entry["name"]] += entry["count"]
_metrics["last_summary"] = {
"dfc_lands": dfc_lands,
"with_dfc": with_dfc,
"cards": cards,
}
_metrics["last_updated"] = timestamp
_metrics["last_updated_iso"] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(timestamp))
def get_mdfc_metrics() -> Dict[str, Any]:
with _lock:
builds = int(_metrics.get("total_builds", 0) or 0)
builds_with = int(_metrics.get("builds_with_mdfc", 0) or 0)
total_lands = int(_metrics.get("total_mdfc_lands", 0) or 0)
ratio = (builds_with / builds) if builds else 0.0
avg_lands = (total_lands / builds_with) if builds_with else 0.0
top_cards = dict(_top_cards.most_common(10))
return {
"total_builds": builds,
"builds_with_mdfc": builds_with,
"build_share": ratio,
"total_mdfc_lands": total_lands,
"avg_mdfc_lands": avg_lands,
"top_cards": top_cards,
"last_summary": _metrics.get("last_summary"),
"last_updated": _metrics.get("last_updated_iso"),
}
def _reset_metrics_for_test() -> None:
with _lock:
_metrics.update(
{
"total_builds": 0,
"builds_with_mdfc": 0,
"total_mdfc_lands": 0,
"last_updated": None,
"last_updated_iso": None,
"last_summary": None,
}
)
_top_cards.clear()
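The telemetry module boils down to a lock-guarded accumulator plus a `Counter` of card names. A condensed, dependency-free sketch of the record/read cycle (simplified: timestamps, sanitization, and the last-summary snapshot are omitted):

```python
import threading
from collections import Counter

_lock = threading.Lock()
_metrics = {"total_builds": 0, "builds_with_mdfc": 0, "total_mdfc_lands": 0}
_top = Counter()

def record(land_summary: dict) -> None:
    # Every build increments total_builds; MDFC stats only move when
    # the summary actually contains double-faced lands.
    dfc = int(land_summary.get("dfc_lands", 0))
    with _lock:
        _metrics["total_builds"] += 1
        if dfc > 0:
            _metrics["builds_with_mdfc"] += 1
            _metrics["total_mdfc_lands"] += dfc
            for card in land_summary.get("dfc_cards", []):
                _top[card["name"]] += card.get("count", 1)

def snapshot() -> dict:
    with _lock:
        builds = _metrics["total_builds"]
        with_mdfc = _metrics["builds_with_mdfc"]
        return {
            **_metrics,
            "build_share": (with_mdfc / builds) if builds else 0.0,
            "top_cards": dict(_top.most_common(10)),
        }

record({"dfc_lands": 2, "dfc_cards": [{"name": "Spikefield Hazard", "count": 2}]})
record({"dfc_lands": 0})
print(snapshot()["build_share"])  # 0.5
```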


@@ -39,6 +39,7 @@ from .setup_utils import (
process_legendary_cards,
check_csv_exists,
save_color_filtered_csvs,
enrich_commander_rows_with_tags,
)
from exceptions import (
CSVFileNotFoundError,
@@ -136,6 +137,9 @@ def determine_commanders() -> None:
logger.info('Applying standard card filters')
filtered_df = filter_dataframe(filtered_df, BANNED_CARDS)
logger.info('Enriching commander metadata with theme and creature tags')
filtered_df = enrich_commander_rows_with_tags(filtered_df, CSV_DIRECTORY)
# Save commander cards
logger.info('Saving validated commander cards')
filtered_df.to_csv(f'{CSV_DIRECTORY}/commander_cards.csv', index=False)


@@ -17,13 +17,16 @@ The module integrates with settings.py for configuration and exceptions.py for e
from __future__ import annotations
# Standard library imports
import ast
import requests
from pathlib import Path
from typing import List, Optional, Union, TypedDict
from typing import List, Optional, Union, TypedDict, Iterable, Dict, Any
# Third-party imports
import pandas as pd
from tqdm import tqdm
import json
from datetime import datetime
# Local application imports
from .setup_constants import (
@@ -45,7 +48,7 @@ from exceptions import (
CommanderValidationError
)
from type_definitions import CardLibraryDF
from settings import FILL_NA_COLUMNS
from settings import FILL_NA_COLUMNS, CSV_DIRECTORY
import logging_util
# Create logger for this module
@@ -54,6 +57,251 @@ logger.setLevel(logging_util.LOG_LEVEL)
logger.addHandler(logging_util.file_handler)
logger.addHandler(logging_util.stream_handler)
def _is_primary_side(value: object) -> bool:
"""Return True when the provided side marker corresponds to a primary face."""
try:
if pd.isna(value):
return True
except Exception:
pass
text = str(value).strip().lower()
return text in {"", "a"}
def _summarize_secondary_face_exclusions(
names: Iterable[str],
source_df: pd.DataFrame,
) -> List[Dict[str, Any]]:
summaries: List[Dict[str, Any]] = []
if not names:
return summaries
for raw_name in names:
name = str(raw_name)
group = source_df[source_df['name'] == name]
if group.empty:
continue
primary_rows = group[group['side'].apply(_is_primary_side)] if 'side' in group.columns else pd.DataFrame()
primary_face = (
str(primary_rows['faceName'].iloc[0])
if not primary_rows.empty and 'faceName' in primary_rows.columns
else ""
)
layout = str(group['layout'].iloc[0]) if 'layout' in group.columns and not group.empty else ""
faces = sorted(set(str(v) for v in group.get('faceName', pd.Series(dtype=str)).dropna().tolist()))
eligible_faces = sorted(
set(
str(v)
for v in group
.loc[~group['side'].apply(_is_primary_side) if 'side' in group.columns else [False] * len(group)]
.get('faceName', pd.Series(dtype=str))
.dropna()
.tolist()
)
)
summaries.append(
{
"name": name,
"primary_face": primary_face or name.split('//')[0].strip(),
"layout": layout,
"faces": faces,
"eligible_faces": eligible_faces,
"reason": "secondary_face_only",
}
)
return summaries
def _write_commander_exclusions_log(entries: List[Dict[str, Any]]) -> None:
"""Persist commander exclusion diagnostics for downstream tooling."""
path = Path(CSV_DIRECTORY) / ".commander_exclusions.json"
if not entries:
try:
path.unlink()
except FileNotFoundError:
return
except Exception as exc:
logger.debug("Unable to remove commander exclusion log: %s", exc)
return
payload = {
"generated_at": datetime.now().isoformat(timespec='seconds'),
"secondary_face_only": entries,
}
try:
path.parent.mkdir(parents=True, exist_ok=True)
with path.open('w', encoding='utf-8') as handle:
json.dump(payload, handle, indent=2, ensure_ascii=False)
except Exception as exc:
logger.warning("Failed to write commander exclusion diagnostics: %s", exc)
def _enforce_primary_face_commander_rules(
candidate_df: pd.DataFrame,
source_df: pd.DataFrame,
) -> pd.DataFrame:
"""Retain only primary faces and record any secondary-face-only exclusions."""
if candidate_df.empty or 'side' not in candidate_df.columns:
_write_commander_exclusions_log([])
return candidate_df
mask_primary = candidate_df['side'].apply(_is_primary_side)
primary_df = candidate_df[mask_primary].copy()
secondary_df = candidate_df[~mask_primary]
primary_names = set(str(n) for n in primary_df.get('name', pd.Series(dtype=str)))
secondary_only_names = sorted(
set(str(n) for n in secondary_df.get('name', pd.Series(dtype=str))) - primary_names
)
if secondary_only_names:
logger.info(
"Excluding %d commander entries where only a secondary face is eligible: %s",
len(secondary_only_names),
", ".join(secondary_only_names),
)
entries = _summarize_secondary_face_exclusions(secondary_only_names, source_df)
_write_commander_exclusions_log(entries)
return primary_df
def _coerce_tag_list(value: object) -> List[str]:
"""Normalize various list-like representations into a list of strings."""
if value is None:
return []
if isinstance(value, float) and pd.isna(value):
return []
if isinstance(value, (list, tuple, set)):
return [str(v).strip() for v in value if str(v).strip()]
text = str(value).strip()
if not text:
return []
try:
parsed = ast.literal_eval(text)
if isinstance(parsed, (list, tuple, set)):
return [str(v).strip() for v in parsed if str(v).strip()]
except Exception:
pass
parts = [part.strip() for part in text.replace(";", ",").split(",")]
return [part for part in parts if part]
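A simplified re-creation (the pandas NaN check is omitted for self-containment) illustrates the three input shapes `_coerce_tag_list` accepts: real sequences, repr'd lists, and comma/semicolon-delimited strings:

```python
import ast

def coerce_tag_list(value) -> list[str]:
    # Simplified copy of _coerce_tag_list above, without the pandas
    # float-NaN branch.
    if value is None:
        return []
    if isinstance(value, (list, tuple, set)):
        return [str(v).strip() for v in value if str(v).strip()]
    text = str(value).strip()
    if not text:
        return []
    try:
        parsed = ast.literal_eval(text)
        if isinstance(parsed, (list, tuple, set)):
            return [str(v).strip() for v in parsed if str(v).strip()]
    except Exception:
        pass  # not a literal; fall through to delimiter splitting
    parts = [part.strip() for part in text.replace(";", ",").split(",")]
    return [part for part in parts if part]

print(coerce_tag_list("['Lifegain', 'Tokens']"))   # ['Lifegain', 'Tokens']
print(coerce_tag_list("Lifegain; Tokens, Blink"))  # ['Lifegain', 'Tokens', 'Blink']
```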
def _collect_commander_tag_metadata(csv_dir: Union[str, Path]) -> Dict[str, Dict[str, List[str]]]:
"""Aggregate theme and creature tags from color-tagged CSV files."""
path = Path(csv_dir)
if not path.exists():
return {}
combined: Dict[str, Dict[str, set[str]]] = {}
columns = ("themeTags", "creatureTypes", "roleTags")
for color in SETUP_COLORS:
color_path = path / f"{color}_cards.csv"
if not color_path.exists():
continue
try:
df = pd.read_csv(color_path, low_memory=False)
except Exception as exc:
logger.debug("Unable to read %s for commander tag enrichment: %s", color_path, exc)
continue
if df.empty or ("name" not in df.columns and "faceName" not in df.columns):
continue
for _, row in df.iterrows():
face_key = str(row.get("faceName", "")).strip()
name_key = str(row.get("name", "")).strip()
keys = {k for k in (face_key, name_key) if k}
if not keys:
continue
for key in keys:
bucket = combined.setdefault(key, {col: set() for col in columns})
for col in columns:
if col not in row:
continue
values = _coerce_tag_list(row.get(col))
if values:
bucket[col].update(values)
enriched: Dict[str, Dict[str, List[str]]] = {}
for key, data in combined.items():
enriched[key] = {col: sorted(values) for col, values in data.items() if values}
return enriched
def enrich_commander_rows_with_tags(
df: pd.DataFrame,
csv_dir: Union[str, Path],
) -> pd.DataFrame:
"""Attach theme and creature tag metadata to commander rows when available."""
if df.empty:
df = df.copy()
for column in ("themeTags", "creatureTypes", "roleTags"):
if column not in df.columns:
df[column] = []
return df
metadata = _collect_commander_tag_metadata(csv_dir)
if not metadata:
df = df.copy()
for column in ("themeTags", "creatureTypes", "roleTags"):
if column not in df.columns:
df[column] = [[] for _ in range(len(df))]
return df
df = df.copy()
for column in ("themeTags", "creatureTypes", "roleTags"):
if column not in df.columns:
df[column] = [[] for _ in range(len(df))]
theme_values: List[List[str]] = []
creature_values: List[List[str]] = []
role_values: List[List[str]] = []
for _, row in df.iterrows():
face_key = str(row.get("faceName", "")).strip()
name_key = str(row.get("name", "")).strip()
entry_face = metadata.get(face_key, {})
entry_name = metadata.get(name_key, {})
combined: Dict[str, set[str]] = {
"themeTags": set(_coerce_tag_list(row.get("themeTags"))),
"creatureTypes": set(_coerce_tag_list(row.get("creatureTypes"))),
"roleTags": set(_coerce_tag_list(row.get("roleTags"))),
}
for source in (entry_face, entry_name):
for column in combined:
combined[column].update(source.get(column, []))
theme_values.append(sorted(combined["themeTags"]))
creature_values.append(sorted(combined["creatureTypes"]))
role_values.append(sorted(combined["roleTags"]))
df["themeTags"] = theme_values
df["creatureTypes"] = creature_values
df["roleTags"] = role_values
enriched_rows = sum(1 for t, c, r in zip(theme_values, creature_values, role_values) if t or c or r)
logger.debug("Enriched %d commander rows with tag metadata", enriched_rows)
return df
# Type definitions
class FilterRule(TypedDict):
"""Type definition for filter rules configuration."""
@@ -194,13 +442,36 @@ def filter_dataframe(df: pd.DataFrame, banned_cards: List[str]) -> pd.DataFrame:
filtered_df = df.copy()
filter_config: FilterConfig = FILTER_CONFIG # Type hint for configuration
for field, rules in filter_config.items():
if field not in filtered_df.columns:
logger.warning('Skipping filter for missing field %s', field)
continue
for rule_type, values in rules.items():
if not values:
continue
if rule_type == 'exclude':
for value in values:
filtered_df = filtered_df[~filtered_df[field].str.contains(value, na=False)]
mask = filtered_df[field].astype(str).str.contains(
value,
case=False,
na=False,
regex=False
)
filtered_df = filtered_df[~mask]
elif rule_type == 'require':
for value in values:
filtered_df = filtered_df[filtered_df[field].str.contains(value, na=False)]
mask = filtered_df[field].astype(str).str.contains(
value,
case=False,
na=False,
regex=False
)
filtered_df = filtered_df[mask]
else:
logger.warning('Unknown filter rule type %s for field %s', rule_type, field)
continue
logger.debug(f'Applied {rule_type} filter for {field}: {values}')
# Remove illegal sets
@@ -406,7 +677,9 @@ def process_legendary_cards(df: pd.DataFrame) -> pd.DataFrame:
"set_legality",
str(e)
) from e
logger.info(f'Commander validation complete. {len(filtered_df)} valid commanders found')
filtered_df = _enforce_primary_face_commander_rules(filtered_df, df)
logger.info('Commander validation complete. %d valid commanders found', len(filtered_df))
return filtered_df
except CommanderValidationError:

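The hardened filter above matches values as case-insensitive literals (`case=False, regex=False`) rather than the previous case-sensitive regex behavior. A small sketch against a hypothetical `text` column:

```python
import pandas as pd

df = pd.DataFrame({"text": ["Venture into the dungeon", "Draw a card", "VENTURE boldly"]})

# Literal, case-insensitive containment: regex metacharacters in the
# filter value are matched verbatim rather than interpreted.
mask = df["text"].astype(str).str.contains("venture", case=False, na=False, regex=False)
print(df[mask]["text"].tolist())  # ['Venture into the dungeon', 'VENTURE boldly']
```

With `regex=True` (the old default), a filter value such as `"C++"` would raise or mis-match; passing `regex=False` sidesteps that class of bug entirely.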

@@ -10,6 +10,8 @@ from deck_builder.builder import DeckBuilder
from deck_builder import builder_constants as bc
from file_setup.setup import initial_setup
from tagging import tagger
from exceptions import CommanderValidationError
from commander_exclusions import lookup_commander_detail
def _is_stale(file1: str, file2: str) -> bool:
"""Return True if file2 is missing or older than file1."""
@@ -67,6 +69,58 @@ def _headless_list_owned_files() -> List[str]:
return sorted(entries)
def _normalize_commander_name(value: Any) -> str:
return str(value or "").strip().casefold()
def _load_commander_name_lookup() -> set[str]:
builder = DeckBuilder(
headless=True,
log_outputs=False,
output_func=lambda *_: None,
input_func=lambda *_: "",
)
df = builder.load_commander_data()
names: set[str] = set()
for column in ("name", "faceName"):
if column not in df.columns:
continue
series = df[column].dropna().astype(str)
for raw in series:
normalized = _normalize_commander_name(raw)
if normalized:
names.add(normalized)
return names
def _validate_commander_available(command_name: str) -> None:
normalized = _normalize_commander_name(command_name)
if not normalized:
return
available = _load_commander_name_lookup()
if normalized in available:
return
info = lookup_commander_detail(command_name)
if info is not None:
primary_face = str(info.get("primary_face") or info.get("name") or "").strip()
eligible_faces = info.get("eligible_faces")
face_hint = ", ".join(str(face) for face in eligible_faces) if isinstance(eligible_faces, list) else ""
message = (
f"Commander '{command_name}' is no longer available because only a secondary face met commander eligibility."
)
if primary_face and _normalize_commander_name(primary_face) != normalized:
message += f" Try selecting the front face '{primary_face}' or choose a different commander."
elif face_hint:
message += f" The remaining eligible faces were: {face_hint}."
else:
message += " Choose a different commander whose front face is commander-legal."
raise CommanderValidationError(message, details={"commander": command_name, "reason": info})
raise CommanderValidationError(f"Commander not found: {command_name}", details={"commander": command_name})
@dataclass
class RandomRunConfig:
"""Runtime options for the headless random build flow."""
@@ -113,6 +167,11 @@ def run(
seed: Optional[int | str] = None,
) -> DeckBuilder:
"""Run a scripted non-interactive deck build and return the DeckBuilder instance."""
trimmed_commander = (command_name or "").strip()
if trimmed_commander:
_validate_commander_available(trimmed_commander)
command_name = trimmed_commander
owned_prompt_inputs: List[str] = []
owned_files_available = _headless_list_owned_files()
if owned_files_available:
@@ -1460,7 +1519,11 @@ def _main() -> int:
print("Error: commander is required. Provide --commander or a JSON config with a 'commander' field.")
return 2
run(**resolved)
try:
run(**resolved)
except CommanderValidationError as exc:
print(str(exc))
return 2
return 0


@@ -0,0 +1,305 @@
"""Catalog diff helper for verifying multi-face merge output.
This utility regenerates the card CSV catalog (optionally writing compatibility
snapshots) and then compares the merged outputs against the baseline snapshots.
It is intended to support the MDFC rollout checklist by providing a concise summary
of how many rows were merged, which cards collapsed into a single record, and
whether any tag unions diverge from expectations.
Example usage (from repo root, inside virtualenv):
python -m code.scripts.preview_dfc_catalog_diff --compat-snapshot --output logs/dfc_catalog_diff.json
The script prints a human readable summary to stdout and optionally writes a JSON
artifact for release/staging review.
"""
from __future__ import annotations
import argparse
import ast
import importlib
import json
import os
import sys
import time
from collections import Counter
from pathlib import Path
from typing import Any, Dict, Iterable, List, Sequence
import pandas as pd
from settings import COLORS, CSV_DIRECTORY
DEFAULT_COMPAT_DIR = Path(os.getenv("DFC_COMPAT_DIR", "csv_files/compat_faces"))
CSV_ROOT = Path(CSV_DIRECTORY)
def _parse_list_cell(value: Any) -> List[str]:
"""Convert serialized list cells ("['A', 'B']") into Python lists."""
if isinstance(value, list):
return [str(item) for item in value]
if value is None:
return []
if isinstance(value, float) and pd.isna(value): # type: ignore[arg-type]
return []
text = str(value).strip()
if not text:
return []
try:
parsed = ast.literal_eval(text)
except (SyntaxError, ValueError):
return [text]
if isinstance(parsed, list):
return [str(item) for item in parsed]
return [str(parsed)]
def _load_catalog(path: Path) -> pd.DataFrame:
if not path.exists():
raise FileNotFoundError(f"Catalog file missing: {path}")
df = pd.read_csv(path)
for column in ("themeTags", "keywords", "creatureTypes"):
if column in df.columns:
df[column] = df[column].apply(_parse_list_cell)
return df
def _multi_face_names(df: pd.DataFrame) -> List[str]:
counts = Counter(df.get("name", []))
return [name for name, count in counts.items() if isinstance(name, str) and count > 1]
def _collect_tags(series: Iterable[List[str]]) -> List[str]:
tags: List[str] = []
for value in series:
if isinstance(value, list):
tags.extend(str(item) for item in value)
return sorted(set(tags))
def _summarize_color(
color: str,
merged: pd.DataFrame,
baseline: pd.DataFrame,
sample_size: int,
) -> Dict[str, Any]:
merged_names = set(merged.get("name", []))
baseline_names = list(baseline.get("name", []))
baseline_name_set = set(name for name in baseline_names if isinstance(name, str))
multi_face = _multi_face_names(baseline)
collapsed = []
tag_mismatches: List[str] = []
missing_after_merge: List[str] = []
for name in multi_face:
group = baseline[baseline["name"] == name]
merged_row = merged[merged["name"] == name]
if merged_row.empty:
missing_after_merge.append(name)
continue
expected_tags = _collect_tags(group["themeTags"]) if "themeTags" in group else []
merged_tags = _collect_tags(merged_row.iloc[[0]]["themeTags"]) if "themeTags" in merged_row else []
if expected_tags != merged_tags:
tag_mismatches.append(name)
collapsed.append(name)
removed_names = sorted(baseline_name_set - merged_names)
added_names = sorted(merged_names - baseline_name_set)
return {
"rows_merged": len(merged),
"rows_baseline": len(baseline),
"row_delta": len(merged) - len(baseline),
"multi_face_groups": len(multi_face),
"collapsed_sample": collapsed[:sample_size],
"tag_union_mismatches": tag_mismatches[:sample_size],
"missing_after_merge": missing_after_merge[:sample_size],
"removed_names": removed_names[:sample_size],
"added_names": added_names[:sample_size],
}
def _refresh_catalog(colors: Sequence[str], compat_snapshot: bool) -> None:
os.environ.pop("ENABLE_DFC_MERGE", None)
os.environ["DFC_COMPAT_SNAPSHOT"] = "1" if compat_snapshot else "0"
importlib.invalidate_caches()
# Reload tagger to pick up the new env var
tagger = importlib.import_module("code.tagging.tagger")
tagger = importlib.reload(tagger) # type: ignore[assignment]
for color in colors:
tagger.load_dataframe(color)
def generate_diff(
colors: Sequence[str],
compat_dir: Path,
sample_size: int,
) -> Dict[str, Any]:
per_color: Dict[str, Any] = {}
overall = {
"total_rows_merged": 0,
"total_rows_baseline": 0,
"total_multi_face_groups": 0,
"colors": len(colors),
"tag_union_mismatches": 0,
"missing_after_merge": 0,
}
for color in colors:
merged_path = CSV_ROOT / f"{color}_cards.csv"
baseline_path = compat_dir / f"{color}_cards_unmerged.csv"
merged_df = _load_catalog(merged_path)
baseline_df = _load_catalog(baseline_path)
summary = _summarize_color(color, merged_df, baseline_df, sample_size)
per_color[color] = summary
overall["total_rows_merged"] += summary["rows_merged"]
overall["total_rows_baseline"] += summary["rows_baseline"]
overall["total_multi_face_groups"] += summary["multi_face_groups"]
overall["tag_union_mismatches"] += len(summary["tag_union_mismatches"])
overall["missing_after_merge"] += len(summary["missing_after_merge"])
overall["row_delta_total"] = overall["total_rows_merged"] - overall["total_rows_baseline"]
return {"overall": overall, "per_color": per_color}
def main(argv: List[str]) -> int:
parser = argparse.ArgumentParser(description="Preview merged vs baseline DFC catalog diff")
parser.add_argument(
"--skip-refresh",
action="store_true",
help="Skip rebuilding the catalog in compatibility mode (requires existing compat snapshots)",
)
parser.add_argument(
"--mode",
default="",
help="[Deprecated] Legacy ENABLE_DFC_MERGE value (compat|1|0 etc.)",
)
parser.add_argument(
"--compat-snapshot",
dest="compat_snapshot",
action="store_true",
help="Write compatibility snapshots before diffing (default: off unless legacy --mode compat)",
)
parser.add_argument(
"--no-compat-snapshot",
dest="compat_snapshot",
action="store_false",
help="Skip compatibility snapshots even if legacy --mode compat is supplied",
)
parser.set_defaults(compat_snapshot=None)
parser.add_argument(
"--colors",
nargs="*",
help="Optional subset of colors to diff (defaults to full COLORS list)",
)
parser.add_argument(
"--compat-dir",
type=Path,
default=DEFAULT_COMPAT_DIR,
help="Directory containing unmerged compatibility snapshots (default: %(default)s)",
)
parser.add_argument(
"--output",
type=Path,
help="Optional JSON file to write with the diff summary",
)
parser.add_argument(
"--sample-size",
type=int,
default=10,
help="Number of sample entries to include per section (default: %(default)s)",
)
args = parser.parse_args(argv)
colors = tuple(args.colors) if args.colors else tuple(COLORS)
compat_dir = args.compat_dir
mode = str(args.mode or "").strip().lower()
if mode and mode not in {"compat", "dual", "both", "1", "on", "true", "0", "off", "false", "disabled"}:
print(
f" Legacy --mode value '{mode}' detected; merge remains enabled. Use --compat-snapshot as needed.",
flush=True,
)
if args.compat_snapshot is None:
compat_snapshot = mode in {"compat", "dual", "both"}
else:
compat_snapshot = args.compat_snapshot
if mode:
print(
" Ignoring deprecated --mode value because --compat-snapshot/--no-compat-snapshot was supplied.",
flush=True,
)
if mode in {"0", "off", "false", "disabled"}:
print(
"⚠ ENABLE_DFC_MERGE=off is deprecated; the merge remains enabled regardless of the value.",
flush=True,
)
if not args.skip_refresh:
start = time.perf_counter()
_refresh_catalog(colors, compat_snapshot)
duration = time.perf_counter() - start
snapshot_msg = "with compat snapshot" if compat_snapshot else "merged-only"
print(f"✔ Refreshed catalog in {duration:.1f}s ({snapshot_msg})")
else:
print(" Using existing catalog outputs (refresh skipped)")
try:
diff = generate_diff(colors, compat_dir, args.sample_size)
except FileNotFoundError as exc:
print(f"ERROR: {exc}", file=sys.stderr)
print("Run without --skip-refresh (or ensure compat snapshots exist).", file=sys.stderr)
return 2
overall = diff["overall"]
print("\n=== DFC Catalog Diff Summary ===")
print(
f"Merged rows: {overall['total_rows_merged']:,} | Baseline rows: {overall['total_rows_baseline']:,} | "
f"Δ rows: {overall['row_delta_total']:,}"
)
print(
f"Multi-face groups: {overall['total_multi_face_groups']:,} | "
f"Tag union mismatches: {overall['tag_union_mismatches']} | Missing after merge: {overall['missing_after_merge']}"
)
for color, summary in diff["per_color"].items():
print(f"\n[{color}] baseline={summary['rows_baseline']} merged={summary['rows_merged']} Δ={summary['row_delta']}")
if summary["multi_face_groups"]:
print(f" multi-face groups: {summary['multi_face_groups']}")
if summary["collapsed_sample"]:
sample = ", ".join(summary["collapsed_sample"][:3])
print(f" collapsed sample: {sample}")
if summary["tag_union_mismatches"]:
print(f" TAG MISMATCH sample: {', '.join(summary['tag_union_mismatches'])}")
if summary["missing_after_merge"]:
print(f" MISSING sample: {', '.join(summary['missing_after_merge'])}")
if summary["removed_names"]:
print(f" removed sample: {', '.join(summary['removed_names'])}")
if summary["added_names"]:
print(f" added sample: {', '.join(summary['added_names'])}")
if args.output:
payload = {
"captured_at": int(time.time()),
"mode": args.mode,
"colors": colors,
"compat_dir": str(compat_dir),
"summary": diff,
}
try:
args.output.parent.mkdir(parents=True, exist_ok=True)
args.output.write_text(json.dumps(payload, indent=2, sort_keys=True), encoding="utf-8")
print(f"\n📄 Wrote JSON summary to {args.output}")
except Exception as exc: # pragma: no cover
print(f"Failed to write output file {args.output}: {exc}", file=sys.stderr)
return 3
return 0
if __name__ == "__main__": # pragma: no cover
raise SystemExit(main(sys.argv[1:]))


@ -0,0 +1,126 @@
"""Regenerate commander catalog with MDFC merge applied.
This helper refreshes `commander_cards.csv` using the latest setup pipeline and
then runs the tagging/merge step so downstream consumers pick up the unified
multi-face rows. The merge is now always enabled; use the optional
`--compat-snapshot` flag to emit an unmerged compatibility snapshot alongside
the merged catalog for downstream validation.
Examples (run from repo root after activating the virtualenv):
python -m code.scripts.refresh_commander_catalog
python -m code.scripts.refresh_commander_catalog --compat-snapshot --skip-setup
The legacy `--mode` argument is retained for backwards compatibility but no
longer disables the merge. `--mode compat` is treated the same as
`--compat-snapshot`, while `--mode off` now issues a warning and still runs the
merge.
"""
from __future__ import annotations
import argparse
import importlib
import os
import sys
from pathlib import Path
from settings import CSV_DIRECTORY
DEFAULT_COMPAT_SNAPSHOT = False
SUPPORTED_COLORS = ("commander",)
def _refresh_setup() -> None:
setup_mod = importlib.import_module("code.file_setup.setup")
setup_mod.determine_commanders()
def _refresh_tags() -> None:
tagger = importlib.import_module("code.tagging.tagger")
tagger = importlib.reload(tagger) # type: ignore[assignment]
for color in SUPPORTED_COLORS:
tagger.load_dataframe(color)
def _summarize_outputs(compat_snapshot: bool) -> str:
merged = Path(CSV_DIRECTORY) / "commander_cards.csv"
compat_dir = Path(os.getenv("DFC_COMPAT_DIR", "csv_files/compat_faces"))
parts = ["✔ Commander catalog refreshed (multi-face merge always on)"]
parts.append(f" merged file: {merged.resolve()}")
if compat_snapshot:
compat_path = compat_dir / "commander_cards_unmerged.csv"
parts.append(f" compat snapshot: {compat_path.resolve()}")
return "\n".join(parts)
def _resolve_compat_snapshot(mode: str, cli_override: bool | None) -> bool:
"""Determine whether to write the compatibility snapshot."""
if cli_override is not None:
return cli_override
normalized = str(mode or "").strip().lower()
if normalized in {"", "1", "true", "on"}:
return False
if normalized in {"compat", "dual", "both"}:
return True
if normalized in {"0", "false", "off", "disabled"}:
print(
"⚠ ENABLE_DFC_MERGE=off is deprecated; the merge remains enabled and no compatibility snapshot is written by default.",
flush=True,
)
return False
if normalized:
print(
f" Legacy --mode value '{normalized}' detected. Multi-face merge is always enabled; pass --compat-snapshot to write the unmerged CSV.",
flush=True,
)
return DEFAULT_COMPAT_SNAPSHOT
def main(argv: list[str]) -> int:
parser = argparse.ArgumentParser(description="Refresh commander catalog with MDFC merge")
parser.add_argument(
"--mode",
default="",
help="[Deprecated] Legacy ENABLE_DFC_MERGE value (compat|1|0 etc.).",
)
parser.add_argument(
"--skip-setup",
action="store_true",
help="Skip the setup.determine_commanders() step if commander_cards.csv is already up to date.",
)
parser.add_argument(
"--compat-snapshot",
dest="compat_snapshot",
action="store_true",
help="Write compatibility snapshots to csv_files/compat_faces/commander_cards_unmerged.csv",
)
parser.add_argument(
"--no-compat-snapshot",
dest="compat_snapshot",
action="store_false",
help="Skip writing compatibility snapshots (default).",
)
parser.set_defaults(compat_snapshot=None)
args = parser.parse_args(argv)
compat_snapshot = _resolve_compat_snapshot(str(args.mode or ""), args.compat_snapshot)
os.environ.pop("ENABLE_DFC_MERGE", None)
os.environ["DFC_COMPAT_SNAPSHOT"] = "1" if compat_snapshot else "0"
importlib.invalidate_caches()
if not args.skip_setup:
_refresh_setup()
_refresh_tags()
print(_summarize_outputs(compat_snapshot))
return 0
if __name__ == "__main__": # pragma: no cover
raise SystemExit(main(sys.argv[1:]))


@ -0,0 +1,304 @@
"""Utilities for merging multi-faced card entries after tagging.
This module groups card DataFrame rows that represent multiple faces of the same
card (transform, split, adventure, modal DFC, etc.) and collapses them into a
single canonical record with merged tags.
"""
from __future__ import annotations
import ast
import json
import math
from datetime import UTC, datetime
from pathlib import Path
from typing import Any, Callable, Dict, Iterable, List, Sequence, Set
import pandas as pd
# Layouts that indicate a card has multiple faces represented as separate rows.
_MULTI_FACE_LAYOUTS: Set[str] = {
"adventure",
"aftermath",
"augment",
"flip",
"host",
"meld",
"modal_dfc",
"reversible_card",
"split",
"transform",
}
_SIDE_PRIORITY = {
"": 0,
"a": 0,
"front": 0,
"main": 0,
"b": 1,
"back": 1,
"c": 2,
}
_LIST_UNION_COLUMNS: Sequence[str] = ("themeTags", "creatureTypes", "roleTags")
_SUMMARY_PATH = Path("logs/dfc_merge_summary.json")
def _text_produces_mana(text: Any) -> bool:
text_str = str(text or "").lower()
if not text_str:
return False
if "add one mana of any color" in text_str or "add one mana of any colour" in text_str:
return True
if "add mana of any color" in text_str or "add mana of any colour" in text_str:
return True
if "mana of any one color" in text_str or "any color of mana" in text_str:
return True
if "add" in text_str:
for sym in ("{w}", "{u}", "{b}", "{r}", "{g}", "{c}"):
if sym in text_str:
return True
return False
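The mana-production heuristic reduces to two checks: an "any color" phrase, or an "add" clause naming a concrete mana symbol. A condensed, self-contained restatement (sample oracle texts are illustrative, not tied to real cards):

```python
def text_produces_mana(text: object) -> bool:
    # Normalize, then look for any-color wording or "add" plus a mana symbol.
    s = str(text or "").lower()
    if not s:
        return False
    any_color_phrases = (
        "add one mana of any color", "add one mana of any colour",
        "add mana of any color", "add mana of any colour",
        "mana of any one color", "any color of mana",
    )
    if any(phrase in s for phrase in any_color_phrases):
        return True
    return "add" in s and any(sym in s for sym in ("{w}", "{u}", "{b}", "{r}", "{g}", "{c}"))

assert text_produces_mana("{T}: Add {G}.") is True
assert text_produces_mana("Add one mana of any color.") is True
assert text_produces_mana("Draw a card.") is False
```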
def load_merge_summary() -> Dict[str, Any]:
try:
with _SUMMARY_PATH.open("r", encoding="utf-8") as handle:
data = json.load(handle)
if isinstance(data, dict):
return data
except Exception:
pass
return {"updated_at": None, "colors": {}}
def merge_multi_face_rows(
df: pd.DataFrame,
color: str,
logger=None,
recorder: Callable[[Dict[str, Any]], None] | None = None,
) -> pd.DataFrame:
"""Merge multi-face card rows into canonical entries with combined tags.
Args:
df: DataFrame containing tagged card data for a specific color.
color: Color name, used for logging context.
logger: Optional logger instance. When provided, debug information is emitted.
recorder: Optional callback invoked with the merge summary payload; it may return an enriched payload to persist in its place.
Returns:
DataFrame with multi-face entries collapsed and combined tag data.
"""
if df.empty or "layout" not in df.columns or "name" not in df.columns:
return df
work_df = df.copy()
layout_series = work_df["layout"].fillna("").astype(str).str.lower()
multi_mask = layout_series.isin(_MULTI_FACE_LAYOUTS)
if not multi_mask.any():
return work_df
drop_indices: List[int] = []
merged_count = 0
merge_details: List[Dict[str, Any]] = []
for name, group in work_df.loc[multi_mask].groupby("name", sort=False):
if len(group) <= 1:
continue
group_sorted = _sort_faces(group)
primary_idx = group_sorted.index[0]
faces_payload: List[Dict[str, Any]] = []
for column in _LIST_UNION_COLUMNS:
if column in group_sorted.columns:
union_values = _merge_object_lists(group_sorted[column])
work_df.at[primary_idx, column] = union_values
if "keywords" in group_sorted.columns:
keyword_union = _merge_keywords(group_sorted["keywords"])
work_df.at[primary_idx, "keywords"] = _join_keywords(keyword_union)
for _, face_row in group_sorted.iterrows():
text_val = face_row.get("text") or face_row.get("oracleText") or ""
mana_cost_val = face_row.get("manaCost", face_row.get("mana_cost", "")) or ""
mana_value_raw = face_row.get("manaValue", face_row.get("mana_value", ""))
try:
if mana_value_raw in (None, ""):
mana_value_val = None
else:
mana_value_val = float(mana_value_raw)
if math.isnan(mana_value_val):
mana_value_val = None
except Exception:
mana_value_val = None
type_val = face_row.get("type", "") or ""
faces_payload.append(
{
"face": str(face_row.get("faceName") or face_row.get("name") or ""),
"side": str(face_row.get("side") or ""),
"layout": str(face_row.get("layout") or ""),
"themeTags": _merge_object_lists([face_row.get("themeTags", [])]),
"roleTags": _merge_object_lists([face_row.get("roleTags", [])]),
"type": str(type_val),
"text": str(text_val),
"mana_cost": str(mana_cost_val),
"mana_value": mana_value_val,
"produces_mana": _text_produces_mana(text_val),
"is_land": 'land' in str(type_val).lower(),
}
)
for idx in group_sorted.index[1:]:
drop_indices.append(idx)
merged_count += 1
layout_set = sorted({f.get("layout", "") for f in faces_payload if f.get("layout")})
removed_faces = faces_payload[1:] if len(faces_payload) > 1 else []
merge_details.append(
{
"name": name,
"total_faces": len(group_sorted),
"dropped_faces": max(len(group_sorted) - 1, 0),
"layouts": layout_set,
"primary_face": faces_payload[0] if faces_payload else {},
"removed_faces": removed_faces,
"theme_tags": sorted({tag for face in faces_payload for tag in face.get("themeTags", [])}),
"role_tags": sorted({tag for face in faces_payload for tag in face.get("roleTags", [])}),
"faces": faces_payload,
}
)
if drop_indices:
work_df = work_df.drop(index=drop_indices)
summary_payload = {
"color": color,
"group_count": merged_count,
"faces_dropped": len(drop_indices),
"multi_face_rows": int(multi_mask.sum()),
"entries": merge_details,
}
if recorder is not None:
try:
maybe_payload = recorder(summary_payload)
if isinstance(maybe_payload, dict):
summary_payload = maybe_payload
except Exception as exc:
if logger is not None:
logger.warning("Failed to record DFC merge summary for %s: %s", color, exc)
if logger is not None:
try:
logger.info(
"dfc_merge_summary %s",
json.dumps(
{
"event": "dfc_merge_summary",
"color": color,
"groups_merged": merged_count,
"faces_dropped": len(drop_indices),
"multi_face_rows": int(multi_mask.sum()),
},
sort_keys=True,
),
)
except Exception:
logger.info(
"dfc_merge_summary event=%s groups=%d dropped=%d rows=%d",
color,
merged_count,
len(drop_indices),
int(multi_mask.sum()),
)
logger.info(
"Merged %d multi-face card groups for %s (dropped %d extra faces)",
merged_count,
color,
len(drop_indices),
)
_persist_merge_summary(color, summary_payload, logger)
# Reset index to keep downstream expectations consistent.
return work_df.reset_index(drop=True)
def _persist_merge_summary(color: str, summary_payload: Dict[str, Any], logger=None) -> None:
try:
_SUMMARY_PATH.parent.mkdir(parents=True, exist_ok=True)
existing = load_merge_summary()
colors = existing.get("colors")
if not isinstance(colors, dict):
colors = {}
summary_payload = dict(summary_payload)
timestamp = datetime.now(UTC).isoformat(timespec="seconds")
summary_payload["timestamp"] = timestamp
colors[color] = summary_payload
existing["colors"] = colors
existing["updated_at"] = timestamp
with _SUMMARY_PATH.open("w", encoding="utf-8") as handle:
json.dump(existing, handle, indent=2, sort_keys=True)
except Exception as exc:
if logger is not None:
logger.warning("Failed to persist DFC merge summary: %s", exc)
def _sort_faces(group: pd.DataFrame) -> pd.DataFrame:
side_series = group.get("side", pd.Series(["" for _ in range(len(group))], index=group.index))
priority = side_series.fillna("").astype(str).str.lower().map(_SIDE_PRIORITY).fillna(3)
return group.assign(__face_order=priority).sort_values(
by=["__face_order", "faceName"], kind="mergesort"
).drop(columns=["__face_order"], errors="ignore")
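The stable ordering `_sort_faces` produces can be sketched without pandas: priority from the side map (unknown sides fall back to 3), ties broken by face name. Priority values copied from `_SIDE_PRIORITY` above; the helper name is illustrative:

```python
SIDE_PRIORITY = {"": 0, "a": 0, "front": 0, "main": 0, "b": 1, "back": 1, "c": 2}

def sort_faces(faces: list[dict]) -> list[dict]:
    # Stable sort: primary (a/front/main) first, then b/back, then c, unknowns last.
    return sorted(
        faces,
        key=lambda f: (
            SIDE_PRIORITY.get(str(f.get("side", "")).lower(), 3),
            f.get("faceName", ""),
        ),
    )

faces = [
    {"faceName": "Withengar Unbound", "side": "b"},
    {"faceName": "Elbrus, the Binding Blade", "side": "a"},
]
assert sort_faces(faces)[0]["faceName"] == "Elbrus, the Binding Blade"
```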
def _merge_object_lists(values: Iterable[Any]) -> List[str]:
merged: Set[str] = set()
for value in values:
merged.update(_coerce_list(value))
return sorted(merged)
def _merge_keywords(values: Iterable[Any]) -> Set[str]:
merged: Set[str] = set()
for value in values:
merged.update(_split_keywords(value))
return merged
def _join_keywords(keywords: Set[str]) -> str:
if not keywords:
return ""
return ", ".join(sorted(keywords))
def _coerce_list(value: Any) -> List[str]:
if isinstance(value, list):
return [str(v) for v in value if str(v)]
if value is None or (isinstance(value, float) and pd.isna(value)):
return []
if isinstance(value, str):
stripped = value.strip()
if not stripped:
return []
try:
parsed = ast.literal_eval(stripped)
except (ValueError, SyntaxError):
parsed = None
if isinstance(parsed, list):
return [str(v) for v in parsed if str(v)]
return [part for part in (s.strip() for s in stripped.split(',')) if part]
return [str(value)]
def _split_keywords(value: Any) -> Set[str]:
if value is None or (isinstance(value, float) and pd.isna(value)):
return set()
if isinstance(value, list):
return {str(v).strip() for v in value if str(v).strip()}
if isinstance(value, str):
return {part.strip() for part in value.split(',') if part.strip()}
return {str(value).strip()}
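`_coerce_list` accepts several serialized shapes: real lists, stringified Python literals, and comma-separated fallbacks. A condensed sketch of the same behavior (the NaN branch is omitted here for brevity):

```python
import ast

def coerce_list(value) -> list[str]:
    # Lists pass through; strings may be a literal like "['a', 'b']"
    # or a comma-separated fallback; everything else is wrapped.
    if isinstance(value, list):
        return [str(v) for v in value if str(v)]
    if value is None:
        return []
    if isinstance(value, str):
        stripped = value.strip()
        if not stripped:
            return []
        try:
            parsed = ast.literal_eval(stripped)
        except (ValueError, SyntaxError):
            parsed = None
        if isinstance(parsed, list):
            return [str(v) for v in parsed if str(v)]
        return [part for part in (s.strip() for s in stripped.split(",")) if part]
    return [str(value)]

assert coerce_list("['Aggro', 'Counters']") == ["Aggro", "Counters"]
assert coerce_list("Aggro, Counters") == ["Aggro", "Counters"]
assert coerce_list(None) == []
```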


@ -1,9 +1,12 @@
from __future__ import annotations
# Standard library imports
import json
import os
import re
from typing import Union
from datetime import UTC, datetime
from pathlib import Path
from typing import Any, Dict, List, Union
# Third-party imports
import pandas as pd
@ -12,9 +15,11 @@ import pandas as pd
from . import tag_utils
from . import tag_constants
from .bracket_policy_applier import apply_bracket_policy_tags
from .multi_face_merger import merge_multi_face_rows
from settings import CSV_DIRECTORY, MULTIPLE_COPY_CARDS, COLORS
import logging_util
from file_setup import setup
from file_setup.setup_utils import enrich_commander_rows_with_tags
# Create logger for this module
logger = logging_util.logging.getLogger(__name__)
@ -22,6 +27,138 @@ logger.setLevel(logging_util.LOG_LEVEL)
logger.addHandler(logging_util.file_handler)
logger.addHandler(logging_util.stream_handler)
_MERGE_FLAG_RAW = str(os.getenv("ENABLE_DFC_MERGE", "") or "").strip().lower()
if _MERGE_FLAG_RAW in {"0", "false", "off", "disabled"}:
logger.warning(
"ENABLE_DFC_MERGE=%s is deprecated and no longer disables the merge; multi-face merge is always enabled.",
_MERGE_FLAG_RAW,
)
elif _MERGE_FLAG_RAW:
logger.info(
"ENABLE_DFC_MERGE=%s detected (deprecated); multi-face merge now runs unconditionally.",
_MERGE_FLAG_RAW,
)
_COMPAT_FLAG_RAW = os.getenv("DFC_COMPAT_SNAPSHOT")
if _COMPAT_FLAG_RAW is not None:
_COMPAT_FLAG_NORMALIZED = str(_COMPAT_FLAG_RAW or "").strip().lower()
DFC_COMPAT_SNAPSHOT = _COMPAT_FLAG_NORMALIZED not in {"0", "false", "off", "disabled"}
else:
DFC_COMPAT_SNAPSHOT = _MERGE_FLAG_RAW in {"compat", "dual", "both"}
_DFC_COMPAT_DIR = Path(os.getenv("DFC_COMPAT_DIR", "csv_files/compat_faces"))
_PER_FACE_SNAPSHOT_RAW = os.getenv("DFC_PER_FACE_SNAPSHOT")
if _PER_FACE_SNAPSHOT_RAW is not None:
_PER_FACE_SNAPSHOT_NORMALIZED = str(_PER_FACE_SNAPSHOT_RAW or "").strip().lower()
DFC_PER_FACE_SNAPSHOT = _PER_FACE_SNAPSHOT_NORMALIZED not in {"0", "false", "off", "disabled"}
else:
DFC_PER_FACE_SNAPSHOT = False
_DFC_PER_FACE_SNAPSHOT_PATH = Path(os.getenv("DFC_PER_FACE_SNAPSHOT_PATH", "logs/dfc_per_face_snapshot.json"))
_PER_FACE_SNAPSHOT_BUFFER: Dict[str, List[Dict[str, Any]]] = {}
def _record_per_face_snapshot(color: str, payload: Dict[str, Any]) -> None:
if not DFC_PER_FACE_SNAPSHOT:
return
entries = payload.get("entries")
if not isinstance(entries, list):
return
bucket = _PER_FACE_SNAPSHOT_BUFFER.setdefault(color, [])
for entry in entries:
if not isinstance(entry, dict):
continue
faces_data = []
raw_faces = entry.get("faces")
if isinstance(raw_faces, list):
for face in raw_faces:
if isinstance(face, dict):
faces_data.append({k: face.get(k) for k in (
"face",
"side",
"layout",
"type",
"text",
"mana_cost",
"mana_value",
"produces_mana",
"is_land",
"themeTags",
"roleTags",
)})
else:
faces_data.append(face)
primary_face = entry.get("primary_face")
if isinstance(primary_face, dict):
primary_face_copy = dict(primary_face)
else:
primary_face_copy = primary_face
removed_faces = entry.get("removed_faces")
if isinstance(removed_faces, list):
removed_faces_copy = [dict(face) if isinstance(face, dict) else face for face in removed_faces]
else:
removed_faces_copy = removed_faces
bucket.append(
{
"name": entry.get("name"),
"total_faces": entry.get("total_faces"),
"dropped_faces": entry.get("dropped_faces"),
"layouts": list(entry.get("layouts", [])) if isinstance(entry.get("layouts"), list) else entry.get("layouts"),
"primary_face": primary_face_copy,
"faces": faces_data,
"removed_faces": removed_faces_copy,
"theme_tags": entry.get("theme_tags"),
"role_tags": entry.get("role_tags"),
}
)
def _flush_per_face_snapshot() -> None:
if not DFC_PER_FACE_SNAPSHOT:
_PER_FACE_SNAPSHOT_BUFFER.clear()
return
if not _PER_FACE_SNAPSHOT_BUFFER:
return
try:
colors_payload = {color: list(entries) for color, entries in _PER_FACE_SNAPSHOT_BUFFER.items()}
payload = {
"generated_at": datetime.now(UTC).isoformat(timespec="seconds"),
"mode": "always_on",
"compat_snapshot": bool(DFC_COMPAT_SNAPSHOT),
"colors": colors_payload,
}
_DFC_PER_FACE_SNAPSHOT_PATH.parent.mkdir(parents=True, exist_ok=True)
with _DFC_PER_FACE_SNAPSHOT_PATH.open("w", encoding="utf-8") as handle:
json.dump(payload, handle, indent=2, sort_keys=True)
logger.info("Wrote per-face snapshot to %s", _DFC_PER_FACE_SNAPSHOT_PATH)
except Exception as exc:
logger.warning("Failed to write per-face snapshot: %s", exc)
finally:
_PER_FACE_SNAPSHOT_BUFFER.clear()
def _merge_summary_recorder(color: str):
def _recorder(payload: Dict[str, Any]) -> Dict[str, Any]:
enriched = dict(payload)
enriched["mode"] = "always_on"
enriched["compat_snapshot"] = bool(DFC_COMPAT_SNAPSHOT)
if DFC_PER_FACE_SNAPSHOT:
_record_per_face_snapshot(color, payload)
return enriched
return _recorder
def _write_compat_snapshot(df: pd.DataFrame, color: str) -> None:
try:
_DFC_COMPAT_DIR.mkdir(parents=True, exist_ok=True)
path = _DFC_COMPAT_DIR / f"{color}_cards_unmerged.csv"
df.to_csv(path, index=False)
logger.info("Wrote unmerged snapshot for %s to %s", color, path)
except Exception as exc:
logger.warning("Failed to write unmerged snapshot for %s: %s", color, exc)
### Setup
## Load the dataframe
def load_dataframe(color: str) -> None:
@ -178,6 +315,18 @@ def tag_by_color(df: pd.DataFrame, color: str) -> None:
apply_bracket_policy_tags(df)
print('\n====================\n')
# Merge multi-face entries before final ordering (always enabled; optional compat snapshot)
if DFC_COMPAT_SNAPSHOT:
try:
_write_compat_snapshot(df.copy(deep=True), color)
except Exception:
pass
df = merge_multi_face_rows(df, color, logger=logger, recorder=_merge_summary_recorder(color))
if color == 'commander':
df = enrich_commander_rows_with_tags(df, CSV_DIRECTORY)
# Lastly, sort all theme tags for easier reading and reorder columns
df = sort_theme_tags(df, color)
df.to_csv(f'{CSV_DIRECTORY}/{color}_cards.csv', index=False)
@ -6915,6 +7064,9 @@ def run_tagging(parallel: bool = False, max_workers: int | None = None):
"""
start_time = pd.Timestamp.now()
if parallel and DFC_PER_FACE_SNAPSHOT:
logger.warning("DFC_PER_FACE_SNAPSHOT=1 detected; per-face metadata snapshots require sequential tagging. Parallel run will skip snapshot emission.")
if parallel:
try:
import concurrent.futures as _f
@ -6937,5 +7089,6 @@ def run_tagging(parallel: bool = False, max_workers: int | None = None):
for color in COLORS:
load_dataframe(color)
_flush_per_face_snapshot()
duration = (pd.Timestamp.now() - start_time).total_seconds()
logger.info(f'Tagged cards in {duration:.2f}s')


@ -19,6 +19,7 @@ def _fake_session(**kw):
"prefer_combos": False,
"combo_target_count": 2,
"combo_balance": "mix",
"swap_mdfc_basics": False,
}
base.update(kw)
return base
@ -47,6 +48,7 @@ def test_start_ctx_from_session_minimal(monkeypatch):
assert "builder" in ctx
assert "stages" in ctx
assert "idx" in ctx
assert calls.get("swap_mdfc_basics") is False
def test_start_ctx_from_session_sets_on_session(monkeypatch):


@ -0,0 +1,77 @@
from __future__ import annotations
from typing import Iterator
import pytest
from fastapi.testclient import TestClient
from code.web.app import app
@pytest.fixture()
def client() -> Iterator[TestClient]:
with TestClient(app) as test_client:
yield test_client
def test_candidate_list_includes_exclusion_warning(monkeypatch: pytest.MonkeyPatch, client: TestClient) -> None:
def fake_candidates(_: str, limit: int = 8):
return [("Sample Front", 10, ["G"])]
def fake_lookup(name: str):
if name == "Sample Front":
return {
"primary_face": "Sample Front",
"eligible_faces": ["Sample Back"],
"reason": "secondary_face_only",
}
return None
monkeypatch.setattr("code.web.routes.build.orch.commander_candidates", fake_candidates)
monkeypatch.setattr("code.web.routes.build.lookup_commander_detail", fake_lookup)
response = client.get("/build/new/candidates", params={"commander": "Sample"})
assert response.status_code == 200
body = response.text
assert "Use the back face &#39;Sample Back&#39; when building" in body
assert "data-name=\"Sample Back\"" in body
assert "data-display=\"Sample Front\"" in body
def test_front_face_submit_returns_modal_error(monkeypatch: pytest.MonkeyPatch, client: TestClient) -> None:
def fake_lookup(name: str):
if "Budoka" in name:
return {
"primary_face": "Budoka Gardener",
"eligible_faces": ["Dokai, Weaver of Life"],
"reason": "secondary_face_only",
}
return None
monkeypatch.setattr("code.web.routes.build.lookup_commander_detail", fake_lookup)
monkeypatch.setattr("code.web.routes.build.orch.bracket_options", lambda: [{"level": 3, "name": "Upgraded"}])
monkeypatch.setattr("code.web.routes.build.orch.ideal_labels", lambda: {})
monkeypatch.setattr("code.web.routes.build.orch.ideal_defaults", lambda: {})
def fail_select(name: str): # pragma: no cover - should not trigger
raise AssertionError(f"commander_select should not be called for {name}")
monkeypatch.setattr("code.web.routes.build.orch.commander_select", fail_select)
client.get("/build")
response = client.post(
"/build/new",
data={
"name": "",
"commander": "Budoka Gardener",
"bracket": "3",
"include_cards": "",
"exclude_cards": "",
"enforcement_mode": "warn",
},
)
assert response.status_code == 200
body = response.text
assert "can&#39;t lead a deck" in body
assert "Use &#39;Dokai, Weaver of Life&#39; as the commander instead" in body
assert "value=\"Dokai, Weaver of Life\"" in body


@ -0,0 +1,221 @@
import ast
import json
from pathlib import Path
import pandas as pd
import pytest
import headless_runner as hr
from exceptions import CommanderValidationError
from file_setup import setup_utils as su
from file_setup.setup_utils import filter_dataframe, process_legendary_cards
import settings
@pytest.fixture
def tmp_csv_dir(tmp_path, monkeypatch):
monkeypatch.setattr(su, "CSV_DIRECTORY", str(tmp_path))
monkeypatch.setattr(settings, "CSV_DIRECTORY", str(tmp_path))
import importlib
setup_module = importlib.import_module("file_setup.setup")
monkeypatch.setattr(setup_module, "CSV_DIRECTORY", str(tmp_path))
return Path(tmp_path)
def _make_card_row(
*,
name: str,
face_name: str,
type_line: str,
side: str | None,
layout: str,
text: str = "",
power: str | None = None,
toughness: str | None = None,
) -> dict:
return {
"name": name,
"faceName": face_name,
"edhrecRank": 1000,
"colorIdentity": "B",
"colors": "B",
"manaCost": "3B",
"manaValue": 4,
"type": type_line,
"creatureTypes": "['Demon']" if "Creature" in type_line else "[]",
"text": text,
"power": power,
"toughness": toughness,
"keywords": "",
"themeTags": "[]",
"layout": layout,
"side": side,
"availability": "paper",
"promoTypes": "",
"securityStamp": "",
"printings": "SET",
}
def test_secondary_face_only_commander_removed(tmp_csv_dir):
name = "Elbrus, the Binding Blade // Withengar Unbound"
df = pd.DataFrame(
[
_make_card_row(
name=name,
face_name="Elbrus, the Binding Blade",
type_line="Legendary Artifact — Equipment",
side="a",
layout="transform",
),
_make_card_row(
name=name,
face_name="Withengar Unbound",
type_line="Legendary Creature — Demon",
side="b",
layout="transform",
power="13",
toughness="13",
),
]
)
processed = process_legendary_cards(df)
assert processed.empty
exclusion_path = tmp_csv_dir / ".commander_exclusions.json"
assert exclusion_path.exists(), "Expected commander exclusion diagnostics to be written"
data = json.loads(exclusion_path.read_text(encoding="utf-8"))
entries = data.get("secondary_face_only", [])
assert any(entry.get("name") == name for entry in entries)
def test_primary_face_retained_and_log_cleared(tmp_csv_dir):
name = "Birgi, God of Storytelling // Harnfel, Horn of Bounty"
df = pd.DataFrame(
[
_make_card_row(
name=name,
face_name="Birgi, God of Storytelling",
type_line="Legendary Creature — God",
side="a",
layout="modal_dfc",
power="3",
toughness="3",
),
_make_card_row(
name=name,
face_name="Harnfel, Horn of Bounty",
type_line="Legendary Artifact",
side="b",
layout="modal_dfc",
),
]
)
processed = process_legendary_cards(df)
assert len(processed) == 1
assert processed.iloc[0]["faceName"] == "Birgi, God of Storytelling"
# Downstream filter should continue to succeed with a single primary row
filtered = filter_dataframe(processed, [])
assert len(filtered) == 1
exclusion_path = tmp_csv_dir / ".commander_exclusions.json"
assert not exclusion_path.exists(), "No exclusion log expected when primary face remains"
def test_headless_validation_reports_secondary_face(monkeypatch):
monkeypatch.setattr(hr, "_load_commander_name_lookup", lambda: set())
exclusion_entry = {
"name": "Elbrus, the Binding Blade // Withengar Unbound",
"primary_face": "Elbrus, the Binding Blade",
"eligible_faces": ["Withengar Unbound"],
}
monkeypatch.setattr(hr, "lookup_commander_detail", lambda name: exclusion_entry if "Withengar" in name else None)
with pytest.raises(CommanderValidationError) as excinfo:
hr._validate_commander_available("Withengar Unbound")
message = str(excinfo.value)
assert "secondary face" in message.lower()
assert "Withengar" in message
def test_commander_theme_tags_enriched(tmp_csv_dir):
import importlib
setup_module = importlib.import_module("file_setup.setup")
name = "Eddie Brock // Venom, Lethal Protector"
front_face = "Eddie Brock"
back_face = "Venom, Lethal Protector"
cards_df = pd.DataFrame(
[
_make_card_row(
name=name,
face_name=front_face,
type_line="Legendary Creature — Symbiote",
side="a",
layout="modal_dfc",
power="3",
toughness="3",
text="Other creatures you control get +1/+1.",
),
_make_card_row(
name=name,
face_name=back_face,
type_line="Legendary Creature — Horror",
side="b",
layout="modal_dfc",
power="5",
toughness="5",
text="Menace",
),
]
)
cards_df.to_csv(tmp_csv_dir / "cards.csv", index=False)
color_df = pd.DataFrame(
[
{
"name": name,
"faceName": front_face,
"themeTags": "['Aggro', 'Counters']",
"creatureTypes": "['Human', 'Warrior']",
"roleTags": "['Commander']",
},
{
"name": name,
"faceName": back_face,
"themeTags": "['Graveyard']",
"creatureTypes": "['Demon']",
"roleTags": "['Finisher']",
},
]
)
color_df.to_csv(tmp_csv_dir / "black_cards.csv", index=False)
setup_module.determine_commanders()
commander_path = tmp_csv_dir / "commander_cards.csv"
assert commander_path.exists(), "Expected commander CSV to be generated"
commander_df = pd.read_csv(
commander_path,
converters={
"themeTags": ast.literal_eval,
"creatureTypes": ast.literal_eval,
"roleTags": ast.literal_eval,
},
)
assert "themeTags" in commander_df.columns
row = commander_df[commander_df["faceName"] == front_face].iloc[0]
assert set(row["themeTags"]) == {"Aggro", "Counters", "Graveyard"}
assert set(row["creatureTypes"]) == {"Human", "Warrior", "Demon"}
assert set(row["roleTags"]) == {"Commander", "Finisher"}


@ -0,0 +1,80 @@
from __future__ import annotations
import csv
from pathlib import Path
import pytest
from code.deck_builder.phases.phase6_reporting import ReportingMixin
class DummyBuilder(ReportingMixin):
def __init__(self) -> None:
self.card_library = {
"Valakut Awakening // Valakut Stoneforge": {
"Card Type": "Instant",
"Count": 2,
"Mana Cost": "{2}{R}",
"Mana Value": "3",
"Role": "",
"Tags": [],
},
"Mountain": {
"Card Type": "Land",
"Count": 1,
"Mana Cost": "",
"Mana Value": "0",
"Role": "",
"Tags": [],
},
}
self.color_identity = ["R"]
self.output_func = lambda *_args, **_kwargs: None # silence export logs
self._full_cards_df = None
self._combined_cards_df = None
self.custom_export_base = "test_dfc_export"
@pytest.fixture()
def builder(monkeypatch: pytest.MonkeyPatch) -> DummyBuilder:
matrix = {
"Valakut Awakening // Valakut Stoneforge": {
"R": 1,
"_dfc_land": True,
"_dfc_counts_as_extra": True,
},
"Mountain": {"R": 1},
}
def _fake_compute(card_library, *_args, **_kwargs):
return matrix
monkeypatch.setattr(
"deck_builder.builder_utils.compute_color_source_matrix",
_fake_compute,
)
return DummyBuilder()
def test_export_decklist_csv_includes_dfc_note(tmp_path: Path, builder: DummyBuilder) -> None:
csv_path = Path(builder.export_decklist_csv(directory=str(tmp_path)))
with csv_path.open("r", encoding="utf-8", newline="") as handle:
reader = csv.DictReader(handle)
rows = {row["Name"]: row for row in reader}
valakut_row = rows["Valakut Awakening // Valakut Stoneforge"]
assert valakut_row["DFCNote"] == "MDFC: Adds extra land slot"
mountain_row = rows["Mountain"]
assert mountain_row["DFCNote"] == ""
def test_export_decklist_text_appends_dfc_annotation(tmp_path: Path, builder: DummyBuilder) -> None:
text_path = Path(builder.export_decklist_text(directory=str(tmp_path)))
lines = text_path.read_text(encoding="utf-8").splitlines()
valakut_line = next(line for line in lines if line.startswith("2 Valakut Awakening"))
assert "[MDFC: Adds extra land slot]" in valakut_line
mountain_line = next(line for line in lines if line.strip().endswith("Mountain"))
assert "MDFC" not in mountain_line
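The fixture above hard-codes `_dfc_land` and `_dfc_counts_as_extra` flags in the stubbed color-source matrix. A minimal sketch of how an exporter could map those flags to the `DFCNote` strings the assertions expect — illustrative only; the actual `ReportingMixin` logic may differ, and the "Counts as land slot" branch is an assumption drawn from the summary tests below:

```python
def dfc_note(matrix_entry: dict) -> str:
    """Derive an export annotation from color-source matrix flags.

    Hypothetical helper: "_dfc_land" marks a card with a land face,
    "_dfc_counts_as_extra" marks one whose land face adds an extra slot.
    """
    if not matrix_entry.get("_dfc_land"):
        return ""  # ordinary cards (e.g. Mountain) carry no note
    if matrix_entry.get("_dfc_counts_as_extra"):
        return "MDFC: Adds extra land slot"
    return "MDFC: Counts as land slot"
```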


@ -0,0 +1,150 @@
from __future__ import annotations
from typing import Dict, Any, List
import pytest
from jinja2 import Environment, FileSystemLoader, select_autoescape
from code.deck_builder.phases.phase6_reporting import ReportingMixin
from code.deck_builder.summary_telemetry import get_mdfc_metrics, _reset_metrics_for_test
class DummyBuilder(ReportingMixin):
def __init__(self, card_library: Dict[str, Dict[str, Any]], colors: List[str]):
self.card_library = card_library
self.color_identity = colors
self.output_lines: List[str] = []
self.output_func = self.output_lines.append # type: ignore[assignment]
self._full_cards_df = None
self._combined_cards_df = None
self.include_exclude_diagnostics = None
self.include_cards = []
self.exclude_cards = []
@pytest.fixture()
def sample_card_library() -> Dict[str, Dict[str, Any]]:
return {
"Mountain": {"Card Type": "Land", "Count": 35, "Mana Cost": "", "Role": "", "Tags": []},
"Branchloft Pathway // Boulderloft Pathway": {
"Card Type": "Land",
"Count": 1,
"Mana Cost": "",
"Role": "",
"Tags": [],
},
"Valakut Awakening // Valakut Stoneforge": {
"Card Type": "Instant",
"Count": 2,
"Mana Cost": "{2}{R}",
"Role": "",
"Tags": [],
},
"Cultivate": {"Card Type": "Sorcery", "Count": 1, "Mana Cost": "{2}{G}", "Role": "", "Tags": []},
}
@pytest.fixture()
def fake_matrix(monkeypatch):
matrix = {
"Mountain": {"R": 1},
"Branchloft Pathway // Boulderloft Pathway": {"G": 1, "W": 1, "_dfc_land": True},
"Valakut Awakening // Valakut Stoneforge": {
"R": 1,
"_dfc_land": True,
"_dfc_counts_as_extra": True,
},
"Cultivate": {},
}
def _fake_compute(card_library, *_):
return matrix
monkeypatch.setattr("deck_builder.builder_utils.compute_color_source_matrix", _fake_compute)
return matrix
@pytest.fixture(autouse=True)
def reset_mdfc_metrics():
_reset_metrics_for_test()
yield
_reset_metrics_for_test()
def test_build_deck_summary_includes_mdfc_totals(sample_card_library, fake_matrix):
builder = DummyBuilder(sample_card_library, ["R", "G"])
summary = builder.build_deck_summary()
land_summary = summary.get("land_summary")
assert land_summary["traditional"] == 36
assert land_summary["dfc_lands"] == 2
assert land_summary["with_dfc"] == 38
assert land_summary["headline"] == "Lands: 36 (38 with DFC)"
dfc_cards = {card["name"]: card for card in land_summary["dfc_cards"]}
branch = dfc_cards["Branchloft Pathway // Boulderloft Pathway"]
assert branch["count"] == 1
assert set(branch["colors"]) == {"G", "W"}
assert branch["adds_extra_land"] is False
assert branch["counts_as_land"] is True
assert branch["note"] == "Counts as land slot"
assert "faces" in branch
assert isinstance(branch["faces"], list) and branch["faces"]
assert all("mana_cost" in face for face in branch["faces"])
valakut = dfc_cards["Valakut Awakening // Valakut Stoneforge"]
assert valakut["count"] == 2
assert valakut["colors"] == ["R"]
assert valakut["adds_extra_land"] is True
assert valakut["counts_as_land"] is False
assert valakut["note"] == "Adds extra land slot"
assert any(face.get("produces_mana") for face in valakut.get("faces", []))
mana_cards = summary["mana_generation"]["cards"]
red_sources = {item["name"]: item for item in mana_cards["R"]}
assert red_sources["Valakut Awakening // Valakut Stoneforge"]["dfc"] is True
assert red_sources["Mountain"]["dfc"] is False
def test_cli_summary_mentions_mdfc_totals(sample_card_library, fake_matrix):
builder = DummyBuilder(sample_card_library, ["R", "G"])
builder.print_type_summary()
joined = "\n".join(builder.output_lines)
assert "Lands: 36 (38 with DFC)" in joined
assert "MDFC sources:" in joined
def test_deck_summary_template_renders_land_copy(sample_card_library, fake_matrix):
builder = DummyBuilder(sample_card_library, ["R", "G"])
summary = builder.build_deck_summary()
env = Environment(
loader=FileSystemLoader("code/web/templates"),
autoescape=select_autoescape(["html", "xml"]),
)
template = env.get_template("partials/deck_summary.html")
html = template.render(
summary=summary,
synergies=[],
game_changers=[],
owned_set=set(),
combos=[],
commander=None,
)
assert "Lands: 36 (38 with DFC)" in html
assert "DFC land" in html
def test_deck_summary_records_mdfc_telemetry(sample_card_library, fake_matrix):
builder = DummyBuilder(sample_card_library, ["R", "G"])
builder.build_deck_summary()
metrics = get_mdfc_metrics()
assert metrics["total_builds"] == 1
assert metrics["builds_with_mdfc"] == 1
assert metrics["total_mdfc_lands"] == 2
assert metrics["last_summary"]["dfc_lands"] == 2
top_cards = metrics.get("top_cards") or {}
assert top_cards.get("Valakut Awakening // Valakut Stoneforge") == 2
assert top_cards.get("Branchloft Pathway // Boulderloft Pathway") == 1


@ -0,0 +1,45 @@
from __future__ import annotations
from types import MethodType
from deck_builder.builder import DeckBuilder
def _builder_with_forest() -> DeckBuilder:
builder = DeckBuilder(output_func=lambda *_: None, input_func=lambda *_: "", headless=True)
builder.card_library = {
"Forest": {"Card Name": "Forest", "Card Type": "Land", "Count": 5},
}
return builder
def _stub_modal_matrix(builder: DeckBuilder) -> None:
def fake_matrix(self: DeckBuilder):
return {
"Bala Ged Recovery": {"G": 1, "_dfc_counts_as_extra": True},
"Forest": {"G": 1},
}
builder._compute_color_source_matrix = MethodType(fake_matrix, builder) # type: ignore[attr-defined]
def test_modal_dfc_swaps_basic_when_enabled():
builder = _builder_with_forest()
builder.swap_mdfc_basics = True
_stub_modal_matrix(builder)
builder.add_card("Bala Ged Recovery", card_type="Instant")
assert builder.card_library["Forest"]["Count"] == 4
assert "Bala Ged Recovery" in builder.card_library
def test_modal_dfc_does_not_swap_when_disabled():
builder = _builder_with_forest()
builder.swap_mdfc_basics = False
_stub_modal_matrix(builder)
builder.add_card("Bala Ged Recovery", card_type="Instant")
assert builder.card_library["Forest"]["Count"] == 5
assert "Bala Ged Recovery" in builder.card_library
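These two tests encode the swap rule: when `swap_mdfc_basics` is enabled and the added card's matrix entry carries `_dfc_counts_as_extra`, one matching basic land is removed so deck size stays constant. A standalone sketch of that decision, with illustrative names only (the real `add_card` hook is more involved):

```python
def apply_mdfc_swap(library: dict, matrix: dict, card: str, basic: str, enabled: bool) -> None:
    """Remove one copy of `basic` when an extra-land MDFC is added.

    Hypothetical helper mirroring the behavior asserted above; not the
    project's actual implementation.
    """
    entry = matrix.get(card, {})
    if enabled and entry.get("_dfc_counts_as_extra") and basic in library:
        library[basic]["Count"] -= 1  # keep total card count steady
```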


@ -0,0 +1,192 @@
from __future__ import annotations
import pandas as pd
from code.tagging.multi_face_merger import merge_multi_face_rows
def _build_dataframe() -> pd.DataFrame:
return pd.DataFrame(
[
{
"name": "Eddie Brock // Venom, Lethal Protector",
"faceName": "Eddie Brock",
"edhrecRank": 12345.0,
"colorIdentity": "B",
"colors": "B",
"manaCost": "{3}{B}{B}",
"manaValue": 5.0,
"type": "Legendary Creature — Human",
"creatureTypes": ["Human"],
"text": "When Eddie Brock enters...",
"power": 3,
"toughness": 4,
"keywords": "Transform",
"themeTags": ["Aggro", "Control"],
"layout": "transform",
"side": "a",
"roleTags": ["Value Engine"],
},
{
"name": "Eddie Brock // Venom, Lethal Protector",
"faceName": "Venom, Lethal Protector",
"edhrecRank": 12345.0,
"colorIdentity": "B",
"colors": "B",
"manaCost": "",
"manaValue": 5.0,
"type": "Legendary Creature — Symbiote",
"creatureTypes": ["Symbiote"],
"text": "Whenever Venom attacks...",
"power": 5,
"toughness": 5,
"keywords": "Menace, Transform",
"themeTags": ["Menace", "Legends Matter"],
"layout": "transform",
"side": "b",
"roleTags": ["Finisher"],
},
{
"name": "Bonecrusher Giant // Stomp",
"faceName": "Bonecrusher Giant",
"edhrecRank": 6789.0,
"colorIdentity": "R",
"colors": "R",
"manaCost": "{2}{R}",
"manaValue": 3.0,
"type": "Creature — Giant",
"creatureTypes": ["Giant"],
"text": "Whenever this creature becomes the target...",
"power": 4,
"toughness": 3,
"keywords": "",
"themeTags": ["Aggro"],
"layout": "adventure",
"side": "a",
"roleTags": [],
},
{
"name": "Bonecrusher Giant // Stomp",
"faceName": "Stomp",
"edhrecRank": 6789.0,
"colorIdentity": "R",
"colors": "R",
"manaCost": "{1}{R}",
"manaValue": 2.0,
"type": "Instant — Adventure",
"creatureTypes": [],
"text": "Stomp deals 2 damage to any target.",
"power": None,
"toughness": None,
"keywords": "Instant",
"themeTags": ["Removal"],
"layout": "adventure",
"side": "b",
"roleTags": [],
},
{
"name": "Expansion // Explosion",
"faceName": "Expansion",
"edhrecRank": 4321.0,
"colorIdentity": "U, R",
"colors": "U, R",
"manaCost": "{U/R}{U/R}",
"manaValue": 2.0,
"type": "Instant",
"creatureTypes": [],
"text": "Copy target instant or sorcery spell...",
"power": None,
"toughness": None,
"keywords": "",
"themeTags": ["Spell Copy"],
"layout": "split",
"side": "a",
"roleTags": ["Copy Enabler"],
},
{
"name": "Expansion // Explosion",
"faceName": "Explosion",
"edhrecRank": 4321.0,
"colorIdentity": "U, R",
"colors": "U, R",
"manaCost": "{X}{X}{U}{R}",
"manaValue": 4.0,
"type": "Instant",
"creatureTypes": [],
"text": "Explosion deals X damage to any target...",
"power": None,
"toughness": None,
"keywords": "",
"themeTags": ["Burn", "Card Draw"],
"layout": "split",
"side": "b",
"roleTags": ["Finisher"],
},
{
"name": "Persistent Petitioners",
"faceName": "Persistent Petitioners",
"edhrecRank": 5555.0,
"colorIdentity": "U",
"colors": "U",
"manaCost": "{1}{U}",
"manaValue": 2.0,
"type": "Creature — Human Advisor",
"creatureTypes": ["Human", "Advisor"],
"text": "{1}{U}, Tap four untapped Advisors you control: Mill 12.",
"power": 1,
"toughness": 3,
"keywords": "",
"themeTags": ["Mill"],
"layout": "normal",
"side": "",
"roleTags": ["Mill Enabler"],
},
]
)
def test_merge_multi_face_rows_combines_themes_and_keywords():
df = _build_dataframe()
merged = merge_multi_face_rows(df, "grixis", logger=None)
# Eddie Brock merge assertions
eddie = merged[merged["name"] == "Eddie Brock // Venom, Lethal Protector"].iloc[0]
assert set(eddie["themeTags"]) == {
"Aggro",
"Control",
"Legends Matter",
"Menace",
}
assert set(eddie["creatureTypes"]) == {"Human", "Symbiote"}
assert eddie["keywords"] == "Menace, Transform"
assert (merged["faceName"] == "Venom, Lethal Protector").sum() == 0
# Bonecrusher Giant adventure merge assertions
bonecrusher = merged[merged["name"] == "Bonecrusher Giant // Stomp"].iloc[0]
assert set(bonecrusher["themeTags"]) == {"Aggro", "Removal"}
assert set(bonecrusher["creatureTypes"]) == {"Giant"}
assert bonecrusher["keywords"] == "Instant"
assert (merged["faceName"] == "Stomp").sum() == 0
# Split card merge assertions
explosion = merged[merged["name"] == "Expansion // Explosion"].iloc[0]
assert set(explosion["themeTags"]) == {"Spell Copy", "Burn", "Card Draw"}
assert set(explosion["roleTags"]) == {"Copy Enabler", "Finisher"}
assert (merged["faceName"] == "Explosion").sum() == 0
# Persistent Petitioners should remain untouched
petitioners = merged[merged["name"] == "Persistent Petitioners"].iloc[0]
assert petitioners["themeTags"] == ["Mill"]
assert petitioners["roleTags"] == ["Mill Enabler"]
assert "faceDetails" not in merged.columns
assert len(merged) == 4
def test_merge_multi_face_rows_is_idempotent():
df = _build_dataframe()
once = merge_multi_face_rows(df, "izzet", logger=None)
twice = merge_multi_face_rows(once, "izzet", logger=None)
pd.testing.assert_frame_equal(once, twice)
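The assertions above pin down the merge semantics: faces sharing a full card name collapse to the primary (side "a") row, with list-valued tag columns unioned across faces in first-seen order, and single-faced cards passing through untouched. A pandas-free sketch of that rule using plain dicts as stand-ins for rows — an assumption about `merge_multi_face_rows`' behavior, not its code:

```python
def merge_faces(rows: list[dict]) -> list[dict]:
    """Collapse multi-face rows by name, unioning themeTags in order."""
    merged: dict[str, dict] = {}
    for row in rows:
        name = row["name"]
        if name not in merged:
            # first face seen becomes the surviving row
            merged[name] = {**row, "themeTags": list(row["themeTags"])}
        else:
            for tag in row["themeTags"]:
                if tag not in merged[name]["themeTags"]:
                    merged[name]["themeTags"].append(tag)
    return list(merged.values())
```

Running it twice over its own output changes nothing, matching the idempotency test.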


@ -0,0 +1,40 @@
import pandas as pd
from file_setup.setup_utils import filter_dataframe
def _record(name: str, security_stamp: str) -> dict[str, object]:
return {
"name": name,
"faceName": name,
"edhrecRank": 100,
"colorIdentity": "G",
"colors": "G",
"manaCost": "{G}",
"manaValue": 1,
"type": "Creature",
"layout": "normal",
"text": "",
"power": "1",
"toughness": "1",
"keywords": "",
"side": "a",
"availability": "paper,arena",
"promoTypes": "",
"securityStamp": security_stamp,
"printings": "RNA",
}
def test_filter_dataframe_removes_acorn_and_heart_security_stamps() -> None:
df = pd.DataFrame(
[
_record("Acorn Card", "Acorn"),
_record("Heart Card", "heart"),
_record("Legal Card", ""),
]
)
filtered = filter_dataframe(df, banned_cards=[])
assert list(filtered["name"]) == ["Legal Card"]
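The test feeds mixed-case stamp values ("Acorn", "heart"), so the rule it exercises must be case-insensitive. A hedged sketch of that predicate in isolation — the real `filter_dataframe` applies it alongside its other filters:

```python
def is_playable(security_stamp: object) -> bool:
    """True unless the card carries an acorn or heart security stamp.

    Illustrative predicate only; acorn/heart stamps mark non-tournament
    (Un-set / playtest-style) printings, compared case-insensitively.
    """
    return str(security_stamp or "").strip().lower() not in {"acorn", "heart"}
```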


@ -15,6 +15,9 @@ from starlette.exceptions import HTTPException as StarletteHTTPException
from starlette.middleware.gzip import GZipMiddleware
from typing import Any, Optional, Dict, Iterable, Mapping
from contextlib import asynccontextmanager
from code.deck_builder.summary_telemetry import get_mdfc_metrics
from tagging.multi_face_merger import load_merge_summary
from .services.combo_utils import detect_all as _detect_all
from .services.theme_catalog_loader import prewarm_common_filters # type: ignore
from .services.tasks import get_session, new_sid, set_session_value # type: ignore
@ -873,6 +876,17 @@ async def status_random_theme_stats():
return JSONResponse({"ok": False, "error": "internal_error"}, status_code=500)
@app.get("/status/dfc_metrics")
async def status_dfc_metrics():
if not SHOW_DIAGNOSTICS:
raise HTTPException(status_code=404, detail="Not Found")
try:
return JSONResponse({"ok": True, "metrics": get_mdfc_metrics()})
except Exception as exc: # pragma: no cover - defensive log
logging.getLogger("web").warning("Failed to fetch MDFC metrics: %s", exc, exc_info=True)
return JSONResponse({"ok": False, "error": "internal_error"}, status_code=500)
def random_modes_enabled() -> bool:
"""Dynamic check so tests that set env after import still work.
@ -2352,7 +2366,13 @@ async def trigger_error(kind: str = Query("http")):
async def diagnostics_home(request: Request) -> HTMLResponse:
if not SHOW_DIAGNOSTICS:
raise HTTPException(status_code=404, detail="Not Found")
return templates.TemplateResponse("diagnostics/index.html", {"request": request})
return templates.TemplateResponse(
"diagnostics/index.html",
{
"request": request,
"merge_summary": load_merge_summary(),
},
)
@app.get("/diagnostics/perf", response_class=HTMLResponse)


@ -27,6 +27,7 @@ from path_util import csv_dir as _csv_dir
from ..services.alts_utils import get_cached as _alts_get_cached, set_cached as _alts_set_cached
from ..services.telemetry import log_commander_create_deck
from urllib.parse import urlparse
from commander_exclusions import lookup_commander_detail
# Cache for available card names used by validation endpoints
_AVAILABLE_CARDS_CACHE: set[str] | None = None
@ -150,6 +151,7 @@ def _rebuild_ctx_with_multicopy(sess: dict) -> None:
prefer_combos=bool(sess.get("prefer_combos")),
combo_target_count=int(sess.get("combo_target_count", 2)),
combo_balance=str(sess.get("combo_balance", "mix")),
swap_mdfc_basics=bool(sess.get("swap_mdfc_basics")),
)
except Exception:
# If rebuild fails (e.g., commander not found in test), fall back to injecting
@ -415,12 +417,22 @@ async def multicopy_save(
async def build_new_modal(request: Request) -> HTMLResponse:
"""Return the New Deck modal content (for an overlay)."""
sid = request.cookies.get("sid") or new_sid()
sess = get_session(sid)
ctx = {
"request": request,
"brackets": orch.bracket_options(),
"labels": orch.ideal_labels(),
"defaults": orch.ideal_defaults(),
"allow_must_haves": ALLOW_MUST_HAVES, # Add feature flag
"form": {
"prefer_combos": bool(sess.get("prefer_combos")),
"combo_count": sess.get("combo_target_count"),
"combo_balance": sess.get("combo_balance"),
"enable_multicopy": bool(sess.get("multi_copy")),
"use_owned_only": bool(sess.get("use_owned_only")),
"prefer_owned": bool(sess.get("prefer_owned")),
"swap_mdfc_basics": bool(sess.get("swap_mdfc_basics")),
},
}
resp = templates.TemplateResponse("build/_new_deck_modal.html", ctx)
resp.set_cookie("sid", sid, httponly=True, samesite="lax")
@ -432,7 +444,38 @@ async def build_new_candidates(request: Request, commander: str = Query("")) ->
"""Return a small list of commander candidates for the modal live search."""
q = (commander or "").strip()
items = orch.commander_candidates(q, limit=8) if q else []
ctx = {"request": request, "query": q, "candidates": items}
candidates: list[dict[str, Any]] = []
for name, score, colors in items:
detail = lookup_commander_detail(name)
preferred = name
warning = None
if detail:
eligible_raw = detail.get("eligible_faces")
eligible = [str(face).strip() for face in eligible_raw or [] if str(face).strip()] if isinstance(eligible_raw, list) else []
norm_name = str(name).strip().casefold()
eligible_norms = [face.casefold() for face in eligible]
if eligible and norm_name not in eligible_norms:
preferred = eligible[0]
primary = str(detail.get("primary_face") or detail.get("name") or name).strip()
if len(eligible) == 1:
warning = (
f"Use the back face '{preferred}' when building. Front face '{primary}' can't lead a deck."
)
else:
faces = ", ".join(f"'{face}'" for face in eligible)
warning = (
f"This commander only works from specific faces: {faces}."
)
candidates.append(
{
"display": name,
"value": preferred,
"score": score,
"colors": colors,
"warning": warning,
}
)
ctx = {"request": request, "query": q, "candidates": candidates}
return templates.TemplateResponse("build/_new_deck_candidates.html", ctx)
@ -445,6 +488,7 @@ async def build_new_inspect(request: Request, name: str = Query(...)) -> HTMLRes
tags = orch.tags_for_commander(info["name"]) or []
recommended = orch.recommended_tags_for_commander(info["name"]) if tags else []
recommended_reasons = orch.recommended_tag_reasons_for_commander(info["name"]) if tags else {}
exclusion_detail = lookup_commander_detail(info["name"])
# Render tags slot content and OOB commander preview simultaneously
# Game Changer flag for this commander (affects bracket UI in modal via tags partial consumer)
is_gc = False
@ -454,7 +498,7 @@ async def build_new_inspect(request: Request, name: str = Query(...)) -> HTMLRes
is_gc = False
ctx = {
"request": request,
"commander": {"name": info["name"]},
"commander": {"name": info["name"], "exclusion": exclusion_detail},
"tags": tags,
"recommended": recommended,
"recommended_reasons": recommended_reasons,
@ -553,6 +597,9 @@ async def build_new_submit(
combo_count: int | None = Form(None),
combo_balance: str | None = Form(None),
enable_multicopy: bool = Form(False),
use_owned_only: bool = Form(False),
prefer_owned: bool = Form(False),
swap_mdfc_basics: bool = Form(False),
# Integrated Multi-Copy (optional)
multi_choice_id: str | None = Form(None),
multi_count: int | None = Form(None),
@ -567,6 +614,57 @@ async def build_new_submit(
"""Handle New Deck modal submit and immediately start the build (skip separate review page)."""
sid = request.cookies.get("sid") or new_sid()
sess = get_session(sid)
def _form_state(commander_value: str) -> dict[str, Any]:
return {
"name": name,
"commander": commander_value,
"primary_tag": primary_tag or "",
"secondary_tag": secondary_tag or "",
"tertiary_tag": tertiary_tag or "",
"tag_mode": tag_mode or "AND",
"bracket": bracket,
"combo_count": combo_count,
"combo_balance": (combo_balance or "mix"),
"prefer_combos": bool(prefer_combos),
"enable_multicopy": bool(enable_multicopy),
"use_owned_only": bool(use_owned_only),
"prefer_owned": bool(prefer_owned),
"swap_mdfc_basics": bool(swap_mdfc_basics),
"include_cards": include_cards or "",
"exclude_cards": exclude_cards or "",
"enforcement_mode": enforcement_mode or "warn",
"allow_illegal": bool(allow_illegal),
"fuzzy_matching": bool(fuzzy_matching),
}
commander_detail = lookup_commander_detail(commander)
if commander_detail:
eligible_raw = commander_detail.get("eligible_faces")
eligible_faces = [str(face).strip() for face in eligible_raw or [] if str(face).strip()] if isinstance(eligible_raw, list) else []
if eligible_faces:
norm_input = str(commander).strip().casefold()
eligible_norms = [face.casefold() for face in eligible_faces]
if norm_input not in eligible_norms:
suggested = eligible_faces[0]
primary_face = str(commander_detail.get("primary_face") or commander_detail.get("name") or commander).strip()
faces_str = ", ".join(f"'{face}'" for face in eligible_faces)
error_msg = (
f"'{primary_face or commander}' can't lead a deck. Use {faces_str} as the commander instead. "
"We've updated the commander field for you."
)
ctx = {
"request": request,
"error": error_msg,
"brackets": orch.bracket_options(),
"labels": orch.ideal_labels(),
"defaults": orch.ideal_defaults(),
"allow_must_haves": ALLOW_MUST_HAVES,
"form": _form_state(suggested),
}
resp = templates.TemplateResponse("build/_new_deck_modal.html", ctx)
resp.set_cookie("sid", sid, httponly=True, samesite="lax")
return resp
# Normalize and validate commander selection (best-effort via orchestrator)
sel = orch.commander_select(commander)
if not sel.get("ok"):
@ -578,23 +676,7 @@ async def build_new_submit(
"labels": orch.ideal_labels(),
"defaults": orch.ideal_defaults(),
"allow_must_haves": ALLOW_MUST_HAVES, # Add feature flag
"form": {
"name": name,
"commander": commander,
"primary_tag": primary_tag or "",
"secondary_tag": secondary_tag or "",
"tertiary_tag": tertiary_tag or "",
"tag_mode": tag_mode or "AND",
"bracket": bracket,
"combo_count": combo_count,
"combo_balance": (combo_balance or "mix"),
"prefer_combos": bool(prefer_combos),
"include_cards": include_cards or "",
"exclude_cards": exclude_cards or "",
"enforcement_mode": enforcement_mode or "warn",
"allow_illegal": bool(allow_illegal),
"fuzzy_matching": bool(fuzzy_matching),
}
"form": _form_state(commander),
}
resp = templates.TemplateResponse("build/_new_deck_modal.html", ctx)
resp.set_cookie("sid", sid, httponly=True, samesite="lax")
@ -654,6 +736,18 @@ async def build_new_submit(
sess["prefer_combos"] = bool(prefer_combos)
except Exception:
sess["prefer_combos"] = False
try:
sess["use_owned_only"] = bool(use_owned_only)
except Exception:
sess["use_owned_only"] = False
try:
sess["prefer_owned"] = bool(prefer_owned)
except Exception:
sess["prefer_owned"] = False
try:
sess["swap_mdfc_basics"] = bool(swap_mdfc_basics)
except Exception:
sess["swap_mdfc_basics"] = False
# Combos config from modal
try:
if combo_count is not None:
@ -1267,6 +1361,9 @@ async def build_step3_submit(
"labels": labels,
"values": submitted,
"commander": sess.get("commander"),
"owned_only": bool(sess.get("use_owned_only")),
"prefer_owned": bool(sess.get("prefer_owned")),
"swap_mdfc_basics": bool(sess.get("swap_mdfc_basics")),
},
)
resp.set_cookie("sid", sid, httponly=True, samesite="lax")
@ -1313,6 +1410,7 @@ async def build_step4_get(request: Request) -> HTMLResponse:
"commander": commander,
"owned_only": bool(sess.get("use_owned_only")),
"prefer_owned": bool(sess.get("prefer_owned")),
"swap_mdfc_basics": bool(sess.get("swap_mdfc_basics")),
},
)
@ -1485,6 +1583,7 @@ async def build_toggle_owned_review(
request: Request,
use_owned_only: str | None = Form(None),
prefer_owned: str | None = Form(None),
swap_mdfc_basics: str | None = Form(None),
) -> HTMLResponse:
"""Toggle the 'use owned only', 'prefer owned', and MDFC basic-swap flags from the Review step and re-render Step 4."""
sid = request.cookies.get("sid") or new_sid()
@ -1492,8 +1591,10 @@ async def build_toggle_owned_review(
sess["last_step"] = 4
only_val = True if (use_owned_only and str(use_owned_only).strip() in ("1","true","on","yes")) else False
pref_val = True if (prefer_owned and str(prefer_owned).strip() in ("1","true","on","yes")) else False
swap_val = True if (swap_mdfc_basics and str(swap_mdfc_basics).strip() in ("1","true","on","yes")) else False
sess["use_owned_only"] = only_val
sess["prefer_owned"] = pref_val
sess["swap_mdfc_basics"] = swap_val
# Do not touch build_ctx here; user hasn't started the build yet from review
labels = orch.ideal_labels()
values = sess.get("ideals") or orch.ideal_defaults()
@ -1507,6 +1608,7 @@ async def build_toggle_owned_review(
"commander": commander,
"owned_only": bool(sess.get("use_owned_only")),
"prefer_owned": bool(sess.get("prefer_owned")),
"swap_mdfc_basics": bool(sess.get("swap_mdfc_basics")),
},
)
resp.set_cookie("sid", sid, httponly=True, samesite="lax")
@ -2888,6 +2990,7 @@ async def build_permalink(request: Request):
"flags": {
"owned_only": bool(sess.get("use_owned_only")),
"prefer_owned": bool(sess.get("prefer_owned")),
"swap_mdfc_basics": bool(sess.get("swap_mdfc_basics")),
},
"locks": list(sess.get("locks", [])),
}
@ -2974,6 +3077,7 @@ async def build_from(request: Request, state: str | None = None) -> HTMLResponse
flags = data.get("flags") or {}
sess["use_owned_only"] = bool(flags.get("owned_only"))
sess["prefer_owned"] = bool(flags.get("prefer_owned"))
sess["swap_mdfc_basics"] = bool(flags.get("swap_mdfc_basics"))
sess["locks"] = list(data.get("locks", []))
# Optional random build rehydration
try:
@ -3037,6 +3141,7 @@ async def build_from(request: Request, state: str | None = None) -> HTMLResponse
"commander": sess.get("commander"),
"owned_only": bool(sess.get("use_owned_only")),
"prefer_owned": bool(sess.get("prefer_owned")),
"swap_mdfc_basics": bool(sess.get("swap_mdfc_basics")),
"locks_restored": locks_restored,
})
resp.set_cookie("sid", sid, httponly=True, samesite="lax")


@ -528,3 +528,13 @@ async def commanders_index(
except Exception:
pass
return templates.TemplateResponse(template_name, context)
@router.get("", response_class=HTMLResponse)
async def commanders_index_alias(
request: Request,
q: str | None = Query(default=None, alias="q"),
theme: str | None = Query(default=None, alias="theme"),
color: str | None = Query(default=None, alias="color"),
page: int = Query(default=1, ge=1),
) -> HTMLResponse:
return await commanders_index(request, q=q, theme=theme, color=color, page=page)


@ -27,6 +27,7 @@ def step5_base_ctx(request: Request, sess: dict, *, include_name: bool = True, i
"prefer_combos": bool(sess.get("prefer_combos")),
"combo_target_count": int(sess.get("combo_target_count", 2)),
"combo_balance": str(sess.get("combo_balance", "mix")),
"swap_mdfc_basics": bool(sess.get("swap_mdfc_basics")),
}
if include_name:
ctx["name"] = sess.get("custom_export_base")
@ -85,6 +86,7 @@ def start_ctx_from_session(sess: dict, *, set_on_session: bool = True) -> Dict[s
combo_balance=str(sess.get("combo_balance", "mix")),
include_cards=sess.get("include_cards"),
exclude_cards=sess.get("exclude_cards"),
swap_mdfc_basics=bool(sess.get("swap_mdfc_basics")),
)
if set_on_session:
sess["build_ctx"] = ctx


@ -1847,6 +1847,7 @@ def start_build_ctx(
combo_balance: str | None = None,
include_cards: List[str] | None = None,
exclude_cards: List[str] | None = None,
swap_mdfc_basics: bool | None = None,
) -> Dict[str, Any]:
logs: List[str] = []
@ -1914,6 +1915,11 @@ def start_build_ctx(
except Exception:
pass
try:
b.swap_mdfc_basics = bool(swap_mdfc_basics)
except Exception:
pass
# Data load
b.determine_color_identity()
b.setup_dataframes()
@ -1980,6 +1986,7 @@ def start_build_ctx(
"history": [], # list of {i, key, label, snapshot}
"locks": {str(n).strip().lower() for n in (locks or []) if str(n).strip()},
"custom_export_base": str(custom_export_base).strip() if isinstance(custom_export_base, str) and custom_export_base.strip() else None,
"swap_mdfc_basics": bool(swap_mdfc_basics),
}
return ctx


@ -662,7 +662,7 @@
window.__dfcFlipCard = function(card){ if(!card) return; flip(card, card.querySelector('.dfc-toggle')); };
window.__dfcGetFace = function(card){ if(!card) return 'front'; return card.getAttribute(FACE_ATTR) || 'front'; };
function scan(){
document.querySelectorAll('.card-sample, .commander-cell, .card-tile, .candidate-tile, .stack-card, .card-preview, .owned-row, .list-row').forEach(ensureButton);
document.querySelectorAll('.card-sample, .commander-cell, .commander-thumb, .card-tile, .candidate-tile, .stack-card, .card-preview, .owned-row, .list-row').forEach(ensureButton);
}
document.addEventListener('pointermove', function(e){ window.__lastPointerEvent = e; }, { passive:true });
document.addEventListener('DOMContentLoaded', scan);
@ -1206,9 +1206,9 @@
if(!el) return null;
// If inside flip button
var btn = el.closest && el.closest('.dfc-toggle');
if(btn) return btn.closest('.card-sample, .commander-cell, .card-tile, .candidate-tile, .card-preview, .stack-card');
if(btn) return btn.closest('.card-sample, .commander-cell, .commander-thumb, .card-tile, .candidate-tile, .card-preview, .stack-card');
// Recognized container classes (add .stack-card for finished/random deck thumbnails)
var container = el.closest && el.closest('.card-sample, .commander-cell, .card-tile, .candidate-tile, .card-preview, .stack-card');
var container = el.closest && el.closest('.card-sample, .commander-cell, .commander-thumb, .card-tile, .candidate-tile, .card-preview, .stack-card');
if(container) return container;
// Image-based detection (any card image carrying data-card-name)
if(el.matches && (el.matches('img.card-thumb') || el.matches('img[data-card-name]') || el.classList.contains('commander-img'))){
@ -1264,12 +1264,12 @@
window.hoverShowByName = function(name){
try {
var el = document.querySelector('[data-card-name="'+CSS.escape(name)+'"]');
if(el){ window.__hoverShowCard(el.closest('.card-sample, .commander-cell, .card-tile, .candidate-tile, .card-preview, .stack-card') || el); }
if(el){ window.__hoverShowCard(el.closest('.card-sample, .commander-cell, .commander-thumb, .card-tile, .candidate-tile, .card-preview, .stack-card') || el); }
} catch(_) {}
};
// Keyboard accessibility & focus traversal (P2 UI Hover keyboard accessibility)
document.addEventListener('focusin', function(e){ var card=e.target.closest && e.target.closest('.card-sample, .commander-cell'); if(card){ show(card, {clientX:card.getBoundingClientRect().left+10, clientY:card.getBoundingClientRect().top+10}); }});
document.addEventListener('focusout', function(e){ var next=e.relatedTarget && e.relatedTarget.closest && e.relatedTarget.closest('.card-sample, .commander-cell'); if(!next) hide(); });
document.addEventListener('focusin', function(e){ var card=e.target.closest && e.target.closest('.card-sample, .commander-cell, .commander-thumb'); if(card){ show(card, {clientX:card.getBoundingClientRect().left+10, clientY:card.getBoundingClientRect().top+10}); }});
document.addEventListener('focusout', function(e){ var next=e.relatedTarget && e.relatedTarget.closest && e.relatedTarget.closest('.card-sample, .commander-cell, .commander-thumb'); if(!next) hide(); });
document.addEventListener('keydown', function(e){ if(e.key==='Escape') hide(); });
// Compact mode event listener
document.addEventListener('mtg:hoverCompactToggle', function(){ panel.classList.toggle('compact-img', !!window.__hoverCompactMode); });


@ -1,13 +1,19 @@
{% if candidates and candidates|length %}
<ul style="list-style:none; padding:0; margin:.35rem 0; display:grid; gap:.25rem;" role="listbox" aria-label="Commander suggestions" tabindex="-1">
{% for name, score, colors in candidates %}
{% for cand in candidates %}
<li>
<button type="button" id="cand-{{ loop.index0 }}" class="chip candidate-btn" role="option" data-idx="{{ loop.index0 }}" data-name="{{ name|e }}"
hx-get="/build/new/inspect?name={{ name|urlencode }}"
<button type="button" id="cand-{{ loop.index0 }}" class="chip candidate-btn" role="option" data-idx="{{ loop.index0 }}" data-name="{{ cand.value|e }}" data-display="{{ cand.display|e }}"
hx-get="/build/new/inspect?name={{ cand.display|urlencode }}"
hx-target="#newdeck-tags-slot" hx-swap="innerHTML"
hx-on="htmx:afterOnLoad: (function(){ try{ var n=this.getAttribute('data-name')||''; var ci = document.querySelector('input[name=commander]'); if(ci){ ci.value=n; try{ ci.selectionStart = ci.selectionEnd = ci.value.length; }catch(_){} } var nm = document.querySelector('input[name=name]'); if(nm && (!nm.value || !nm.value.trim())){ nm.value=n; } }catch(_){ } }).call(this)">
{{ name }}
hx-on="htmx:afterOnLoad: (function(){ try{ var preferred=this.getAttribute('data-name')||''; var displayed=this.getAttribute('data-display')||preferred; var ci = document.querySelector('input[name=commander]'); if(ci){ ci.value=preferred; try{ ci.selectionStart = ci.selectionEnd = ci.value.length; }catch(_){} try{ ci.dispatchEvent(new Event('input', { bubbles: true })); }catch(_){ } } var nm = document.querySelector('input[name=name]'); if(nm && (!nm.value || !nm.value.trim())){ nm.value=displayed; } }catch(_){ } }).call(this)">
{{ cand.display }}
{% if cand.warning %}
<span aria-hidden="true" style="margin-left:.35rem; font-size:11px; color:#facc15;">⚠</span>
{% endif %}
</button>
{% if cand.warning %}
<div class="muted" style="font-size:11px; margin:.25rem 0 0 .5rem; color:#facc15;" role="note">⚠ {{ cand.warning }}</div>
{% endif %}
</li>
{% endfor %}
</ul>


@@ -55,9 +55,9 @@
<fieldset>
<legend>Preferences</legend>
<div style="text-align: left;">
<div style="margin-bottom: 1rem;">
<label style="display: inline-flex; align-items: center; gap: 0.5rem; margin: 0;" title="When enabled, the builder will try to auto-complete missing combo partners near the end of the build (respecting owned-only and locks).">
<input type="checkbox" name="prefer_combos" id="pref-combos-chk" style="margin: 0;" />
<div style="margin-bottom: 1rem; display:flex; flex-direction:column; gap:0.75rem;">
<label for="pref-combos-chk" style="display:grid; grid-template-columns:auto 1fr; align-items:center; column-gap:0.5rem; margin:0; width:100%; cursor:pointer; text-align:left;" title="When enabled, the builder will try to auto-complete missing combo partners near the end of the build (respecting owned-only and locks).">
<input type="checkbox" name="prefer_combos" id="pref-combos-chk" value="1" style="margin:0; cursor:pointer;" {% if form and form.prefer_combos %}checked{% endif %} />
<span>Prioritize combos</span>
</label>
<div id="pref-combos-config" style="margin-top: 0.5rem; margin-left: 1.5rem; padding: 0.5rem; border: 1px solid var(--border); border-radius: 8px; display: none;">
@@ -80,12 +80,24 @@
</div>
</div>
</div>
</div>
<div style="margin-bottom: 1rem;">
<label style="display: inline-flex; align-items: center; gap: 0.5rem; margin: 0;" title="When enabled, include a Multi-Copy package for matching archetypes (e.g., tokens/tribal).">
<input type="checkbox" name="enable_multicopy" id="pref-mc-chk" style="margin: 0;" />
<label for="pref-mc-chk" style="display:grid; grid-template-columns:auto 1fr; align-items:center; column-gap:0.5rem; margin:0; width:100%; cursor:pointer; text-align:left;" title="When enabled, include a Multi-Copy package for matching archetypes (e.g., tokens/tribal).">
<input type="checkbox" name="enable_multicopy" id="pref-mc-chk" value="1" style="margin:0; cursor:pointer;" {% if form and form.enable_multicopy %}checked{% endif %} />
<span>Enable Multi-Copy package</span>
</label>
<div style="display:flex; flex-direction:column; gap:0.5rem; margin-top:0.75rem;">
<label for="use-owned-chk" style="display:grid; grid-template-columns:auto 1fr; align-items:center; column-gap:0.5rem; margin:0; width:100%; cursor:pointer; text-align:left;" title="Limit the pool to cards you already own. Cards outside your owned library will be skipped.">
<input type="checkbox" name="use_owned_only" id="use-owned-chk" value="1" style="margin:0; cursor:pointer;" {% if form and form.use_owned_only %}checked{% endif %} />
<span>Use only owned cards</span>
</label>
<label for="prefer-owned-chk" style="display:grid; grid-template-columns:auto 1fr; align-items:center; column-gap:0.5rem; margin:0; width:100%; cursor:pointer; text-align:left;" title="Still allow unowned cards, but rank owned cards higher when choosing picks.">
<input type="checkbox" name="prefer_owned" id="prefer-owned-chk" value="1" style="margin:0; cursor:pointer;" {% if form and form.prefer_owned %}checked{% endif %} />
<span>Prefer owned cards (allow unowned fallback)</span>
</label>
<label for="swap-mdfc-chk" style="display:grid; grid-template-columns:auto 1fr; align-items:center; column-gap:0.5rem; margin:0; width:100%; cursor:pointer; text-align:left;" title="When enabled, modal DFC lands will replace a matching basic land as they are added so land counts stay level without manual trims.">
<input type="checkbox" name="swap_mdfc_basics" id="swap-mdfc-chk" value="1" style="margin:0; cursor:pointer;" {% if form and form.swap_mdfc_basics %}checked{% endif %} />
<span>Swap basics for MDFC lands</span>
</label>
</div>
</div>
</div>
</fieldset>


@@ -15,6 +15,27 @@
</script>
</div>
{% set exclusion = commander.exclusion if commander is defined and commander.exclusion is defined else None %}
{% if exclusion %}
{% set eligible_raw = exclusion.eligible_faces if exclusion.eligible_faces is defined else [] %}
{% set eligible_list = eligible_raw if eligible_raw is iterable else [] %}
{% set eligible_lower = eligible_list | map('lower') | list %}
{% set current_lower = commander.name|lower %}
{% if eligible_list and (current_lower not in eligible_lower or exclusion.reason == 'secondary_face_only') %}
<div class="muted" style="font-size:12px; margin-top:.35rem; color:#facc15;" role="note">
{% if eligible_list|length == 1 %}
⚠ This commander only works from '{{ eligible_list[0] }}'.
{% if exclusion.primary_face and exclusion.primary_face|lower != eligible_list[0]|lower %}
Front face '{{ exclusion.primary_face }}' can't lead a deck.
{% endif %}
We'll build using the supported face automatically.
{% else %}
⚠ This commander only works from these faces: {{ eligible_list | join(', ') }}. We'll build using the supported faces automatically.
{% endif %}
</div>
{% endif %}
{% endif %}
<div>
{% if tags and tags|length %}
<div class="muted" style="font-size:12px; margin-bottom:.35rem;">Pick up to three themes. Toggle AND/OR to control how themes combine.</div>


@@ -30,6 +30,10 @@
<input type="checkbox" name="prefer_owned" value="1" {% if prefer_owned %}checked{% endif %} onchange="this.form.requestSubmit();" />
Prefer owned cards (allow unowned fallback)
</label>
<label style="display:flex; align-items:center; gap:.35rem;" title="When enabled, modal DFC lands will replace a matching basic land as they are added so land counts stay level without manual trims.">
<input type="checkbox" name="swap_mdfc_basics" value="1" {% if swap_mdfc_basics %}checked{% endif %} onchange="this.form.requestSubmit();" />
Swap basics for MDFC lands
</label>
<a href="/owned" target="_blank" rel="noopener" class="btn">Manage Owned Library</a>
</form>
<div class="muted" style="font-size:12px; margin-top:-.25rem;">Tip: Locked cards are respected on reruns in Step 5.</div>


@@ -74,9 +74,10 @@
<p>Tags: {{ deck_theme_tags|default([])|join(', ') }}</p>
<div style="margin:.35rem 0; color: var(--muted); display:flex; gap:.5rem; align-items:center; flex-wrap:wrap;">
<span>Owned-only: <strong>{{ 'On' if owned_only else 'Off' }}</strong></span>
<div style="display:flex;align-items:center;gap:1rem;">
<div style="display:flex;align-items:center;gap:1rem;flex-wrap:wrap;">
<button type="button" hx-get="/build/step4" hx-target="#wizard" hx-swap="innerHTML" style="background:#374151; color:#e5e7eb; border:none; border-radius:6px; padding:.25rem .5rem; cursor:pointer; font-size:12px;" title="Change owned settings in Review">Edit in Review</button>
<div>Prefer-owned: <strong>{{ 'On' if prefer_owned else 'Off' }}</strong></div>
<div>MDFC swap: <strong>{{ 'On' if swap_mdfc_basics else 'Off' }}</strong></div>
</div>
<span style="margin-left:auto;"><a href="/owned" target="_blank" rel="noopener" class="btn">Manage Owned Library</a></span>
</div>


@@ -91,7 +91,7 @@
.commander-list { display:flex; flex-direction:column; gap:1rem; margin-top:.5rem; }
.commander-row { display:flex; gap:1rem; padding:1rem; border:1px solid var(--border); border-radius:14px; background:var(--panel); align-items:stretch; }
.commander-thumb { width:160px; flex:0 0 auto; }
.commander-thumb { width:160px; flex:0 0 auto; position:relative; }
.commander-thumb img { width:160px; height:auto; border-radius:10px; border:1px solid var(--border); background:#0b0d12; display:block; }
.commander-main { flex:1 1 auto; display:flex; flex-direction:column; gap:.6rem; min-width:0; }
.commander-header { display:flex; flex-wrap:wrap; align-items:center; gap:.5rem .75rem; }


@@ -1,6 +1,7 @@
{# Commander row partial fed by CommanderView entries #}
{% from "partials/_macros.html" import color_identity %}
{% set record = entry.record %}
{% set display_label = record.name if '//' in record.name else record.display_name %}
<article class="commander-row" data-commander-slug="{{ record.slug }}" data-hover-simple="true">
<div class="commander-thumb">
{% set small = record.image_small_url or record.image_normal_url %}
@@ -12,12 +13,13 @@
loading="lazy"
decoding="async"
data-card-name="{{ record.display_name }}"
data-original-name="{{ record.name }}"
data-hover-simple="true"
/>
</div>
<div class="commander-main">
<div class="commander-header">
<h3 class="commander-name">{{ record.display_name }}</h3>
<h3 class="commander-name">{{ display_label }}</h3>
{{ color_identity(record.color_identity, record.is_colorless, entry.color_aria_label, entry.color_label) }}
</div>
<p class="commander-context muted">{{ record.type_line or 'Legendary Creature' }}</p>


@@ -12,6 +12,62 @@
<button class="btn" id="diag-theme-reset">Reset theme preference</button>
</div>
</div>
<div class="card" style="background: var(--panel); border:1px solid var(--border); border-radius:10px; padding:.75rem; margin-bottom:.75rem">
<h3 style="margin-top:0">Multi-face merge snapshot</h3>
<div class="muted" style="margin-bottom:.35rem">Pulls from <code>logs/dfc_merge_summary.json</code> to verify merge coverage.</div>
{% set colors = merge_summary.get('colors') if merge_summary else {} %}
{% if colors %}
<div class="muted" style="margin-bottom:.35rem">Last updated: {{ merge_summary.updated_at or 'unknown' }}</div>
<div style="overflow-x:auto">
<table style="width:100%; border-collapse:collapse; font-size:13px;">
<thead>
<tr style="border-bottom:1px solid var(--border); text-align:left;">
<th style="padding:.35rem .25rem;">Color</th>
<th style="padding:.35rem .25rem;">Groups merged</th>
<th style="padding:.35rem .25rem;">Faces dropped</th>
<th style="padding:.35rem .25rem;">Multi-face rows</th>
<th style="padding:.35rem .25rem;">Latest entries</th>
</tr>
</thead>
<tbody>
{% for color, payload in colors.items()|dictsort %}
<tr style="border-bottom:1px solid rgba(148,163,184,0.2);">
<td style="padding:.35rem .25rem; font-weight:600;">{{ color|title }}</td>
<td style="padding:.35rem .25rem;">{{ payload.group_count or 0 }}</td>
<td style="padding:.35rem .25rem;">{{ payload.faces_dropped or 0 }}</td>
<td style="padding:.35rem .25rem;">{{ payload.multi_face_rows or 0 }}</td>
<td style="padding:.35rem .25rem;">
{% set entries = payload.entries or [] %}
{% if entries %}
<details>
<summary style="cursor:pointer;">{{ entries|length }} recorded</summary>
<ul style="margin:.35rem 0 0 .75rem; padding:0; list-style:disc; max-height:180px; overflow:auto;">
{% for entry in entries %}
{% if loop.index0 < 5 %}
<li style="margin-bottom:.25rem;">
<strong>{{ entry.name }}</strong> — {{ entry.total_faces }} faces (dropped {{ entry.dropped_faces }})
</li>
{% elif loop.index0 == 5 %}
<li style="font-size:11px; opacity:.75;">… {{ entries|length - 5 }} more entries</li>
{% break %}
{% endif %}
{% endfor %}
</ul>
</details>
{% else %}
<span class="muted">No groups recorded</span>
{% endif %}
</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
{% else %}
<div class="muted">No merge summary has been recorded. Run the tagger with multi-face merging enabled.</div>
{% endif %}
<div id="dfcMetrics" class="muted" style="margin-top:.5rem;">Loading MDFC metrics…</div>
</div>
<div class="card" style="background: var(--panel); border:1px solid var(--border); border-radius:10px; padding:.75rem; margin-bottom:.75rem">
<h3 style="margin-top:0">Performance (local)</h3>
<div class="muted" style="margin-bottom:.35rem">Scroll the Step 5 list; this panel shows a rough FPS estimate and virtualization renders.</div>
@@ -193,6 +249,71 @@
.catch(function(){ tokenEl.textContent = 'Theme stats unavailable'; });
}
loadTokenStats();
var dfcMetricsEl = document.getElementById('dfcMetrics');
function renderDfcMetrics(payload){
if (!dfcMetricsEl) return;
try {
if (!payload || payload.ok !== true) {
dfcMetricsEl.textContent = 'MDFC metrics unavailable';
return;
}
var metrics = payload.metrics || {};
var html = '';
html += '<div><strong>Deck summaries observed:</strong> ' + String(metrics.total_builds || 0) + '</div>';
var withDfc = Number(metrics.builds_with_mdfc || 0);
var share = metrics.build_share != null ? Number(metrics.build_share) : null;
if (!Number.isNaN(share) && share !== null) {
share = (share * 100).toFixed(1);
} else {
share = null;
}
html += '<div><strong>With MDFCs:</strong> ' + String(withDfc);
if (share !== null) {
html += ' (' + share + '%)';
}
html += '</div>';
var totalLands = Number(metrics.total_mdfc_lands || 0);
var avg = metrics.avg_mdfc_lands != null ? Number(metrics.avg_mdfc_lands) : null;
html += '<div><strong>Total MDFC lands:</strong> ' + String(totalLands);
if (avg !== null && !Number.isNaN(avg)) {
html += ' (avg ' + avg.toFixed(2) + ')';
}
html += '</div>';
var top = metrics.top_cards || {};
var topKeys = Object.keys(top);
if (topKeys.length) {
var items = topKeys.slice(0, 5).map(function(name){
return name + ' (' + String(top[name]) + ')';
});
html += '<div style="font-size:11px;">Top MDFC sources: ' + items.join(', ') + '</div>';
}
var last = metrics.last_summary || {};
if (typeof last.dfc_lands !== 'undefined') {
html += '<div style="font-size:11px; margin-top:0.25rem;">Last summary: ' + String(last.dfc_lands || 0) + ' MDFC lands · total with MDFCs ' + String(last.with_dfc || 0) + '</div>';
}
if (metrics.last_updated) {
html += '<div style="font-size:11px;">Updated: ' + String(metrics.last_updated) + '</div>';
}
dfcMetricsEl.innerHTML = html;
} catch (_){
dfcMetricsEl.textContent = 'MDFC metrics unavailable';
}
}
function loadDfcMetrics(){
if (!dfcMetricsEl) return;
dfcMetricsEl.textContent = 'Loading MDFC metrics…';
fetch('/status/dfc_metrics', { cache: 'no-store' })
.then(function(resp){
if (resp.status === 404) {
dfcMetricsEl.textContent = 'Diagnostics disabled (metrics unavailable)';
return null;
}
return resp.json();
})
.then(function(data){ if (data) renderDfcMetrics(data); })
.catch(function(){ dfcMetricsEl.textContent = 'MDFC metrics unavailable'; });
}
loadDfcMetrics();
// Theme status and reset
try{
var tEl = document.getElementById('themeSummary');


@@ -29,6 +29,8 @@
.stack-card:hover { z-index: 999; transform: translateY(-2px); box-shadow: 0 10px 22px rgba(0,0,0,.6); }
.count-badge { position:absolute; top:6px; right:6px; background:rgba(17,24,39,.9); color:#e5e7eb; border:1px solid var(--border); border-radius:12px; font-size:12px; line-height:18px; height:18px; padding:0 6px; pointer-events:none; }
.owned-badge { position:absolute; top:6px; left:6px; background:rgba(17,24,39,.9); color:#e5e7eb; border:1px solid var(--border); border-radius:12px; font-size:12px; line-height:18px; height:18px; min-width:18px; padding:0 6px; text-align:center; pointer-events:none; z-index: 2; }
.dfc-thumb-badge { position:absolute; bottom:8px; left:6px; background:rgba(15,23,42,.92); border:1px solid #34d399; color:#bbf7d0; border-radius:12px; font-size:11px; line-height:18px; height:18px; padding:0 6px; pointer-events:none; }
.dfc-thumb-badge.counts { border-color:#60a5fa; color:#bfdbfe; }
.owned-flag { font-size:.95rem; opacity:.9; }
</style>
<div id="typeview-list" class="typeview">
@@ -47,8 +49,11 @@
.list-row .count { font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace; font-variant-numeric: tabular-nums; font-feature-settings: 'tnum'; text-align:right; color:#94a3b8; }
.list-row .times { color:#94a3b8; text-align:center; font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace; }
.list-row .name { display:inline-block; padding: 2px 4px; border-radius: 6px; white-space: nowrap; overflow: hidden; text-overflow: ellipsis; }
.list-row .flip-slot { min-width:2.4em; display:flex; justify-content:center; align-items:center; }
.list-row .flip-slot { min-width:2.4em; display:flex; justify-content:flex-start; align-items:center; }
.list-row .owned-flag { width: 1.6em; min-width: 1.6em; text-align:center; display:inline-block; }
.dfc-land-chip { display:inline-flex; align-items:center; gap:.25rem; padding:2px 6px; border-radius:999px; font-size:11px; font-weight:600; background:#0f172a; border:1px solid #334155; color:#e5e7eb; line-height:1; }
.dfc-land-chip.extra { border-color:#34d399; color:#a7f3d0; }
.dfc-land-chip.counts { border-color:#60a5fa; color:#bfdbfe; }
</style>
<div class="list-grid">
{% for c in clist %}
@@ -69,7 +74,11 @@
<span class="count">{{ cnt }}</span>
<span class="times">x</span>
<span class="name dfc-anchor" title="{{ c.name }}" data-card-name="{{ c.name }}" data-count="{{ cnt }}" data-role="{{ c.role }}" data-tags="{{ (c.tags|map('trim')|join(', ')) if c.tags else '' }}"{% if overlaps %} data-overlaps="{{ overlaps|join(', ') }}"{% endif %}>{{ c.name }}</span>
<span class="flip-slot" aria-hidden="true"></span>
<span class="flip-slot" aria-hidden="true">
{% if c.dfc_land %}
<span class="dfc-land-chip {% if c.dfc_adds_extra_land %}extra{% else %}counts{% endif %}" title="{{ c.dfc_note or 'Modal double-faced land' }}">DFC land{% if c.dfc_adds_extra_land %} +1{% endif %}</span>
{% endif %}
</span>
<span class="owned-flag" title="{{ 'Owned' if owned else 'Not owned' }}" aria-label="{{ 'Owned' if owned else 'Not owned' }}">{% if owned %}✔{% else %}✖{% endif %}</span>
</div>
{% endfor %}
@@ -106,6 +115,9 @@
sizes="(max-width: 1200px) 160px, 240px" />
<div class="count-badge">{{ cnt }}x</div>
<div class="owned-badge" title="{{ 'Owned' if owned else 'Not owned' }}" aria-label="{{ 'Owned' if owned else 'Not owned' }}">{% if owned %}✔{% else %}✖{% endif %}</div>
{% if c.dfc_land %}
<div class="dfc-thumb-badge {% if c.dfc_adds_extra_land %}extra{% else %}counts{% endif %}" title="{{ c.dfc_note or 'Modal double-faced land' }}">DFC{% if c.dfc_adds_extra_land %}+1{% endif %}</div>
{% endif %}
</div>
{% endfor %}
</div>
@@ -122,6 +134,60 @@
<!-- Deck Summary initializer script moved below markup for proper element availability -->
<!-- Land Summary -->
{% set land = summary.land_summary if summary else None %}
{% if land %}
<section style="margin-top:1rem;">
<h5>Land Summary</h5>
<div class="muted" style="font-weight:600; margin-bottom:.35rem;">
{{ land.headline or ('Lands: ' ~ (land.traditional or 0)) }}
</div>
<div style="display:flex; flex-wrap:wrap; gap:.75rem; align-items:flex-start;">
<div class="muted">Traditional land slots: <strong>{{ land.traditional or 0 }}</strong></div>
<div class="muted">MDFC land additions: <strong>{{ land.dfc_lands or 0 }}</strong></div>
<div class="muted">Total with MDFCs: <strong>{{ land.with_dfc or land.traditional or 0 }}</strong></div>
</div>
{% if land.dfc_cards %}
<details style="margin-top:.5rem;">
<summary>MDFC mana sources ({{ land.dfc_cards|length }})</summary>
<ul style="list-style:none; padding:0; margin:.35rem 0 0; display:grid; gap:.35rem;">
{% for card in land.dfc_cards %}
{% set extra = card.adds_extra_land or card.counts_as_extra %}
{% set colors = card.colors or [] %}
<li class="muted" style="display:flex; gap:.5rem; flex-wrap:wrap; align-items:flex-start;">
<span class="chip"><span class="dot" style="background:#10b981;"></span> {{ card.name }} ×{{ card.count or 1 }}</span>
<span>Colors: {{ colors|join(', ') if colors else 'Colorless' }}</span>
{% if extra %}
<span class="chip" style="background:#0f172a; border-color:#34d399; color:#a7f3d0;">{{ card.note or 'Adds extra land slot' }}</span>
{% else %}
<span class="chip" style="background:#111827; border-color:#60a5fa; color:#bfdbfe;">{{ card.note or 'Counts as land slot' }}</span>
{% endif %}
{% if card.faces %}
<ul style="list-style:none; padding:0; margin:.2rem 0 0; display:grid; gap:.15rem; flex:1 0 100%;">
{% for face in card.faces %}
{% set face_name = face.get('face') or face.get('faceName') or 'Face' %}
{% set face_type = face.get('type') or '' %}
{% set mana_cost = face.get('mana_cost') %}
{% set mana_value = face.get('mana_value') %}
{% set produces = face.get('produces_mana') %}
<li style="font-size:0.85rem; color:#e5e7eb; opacity:.85;">
<span>{{ face_name }}</span>
<span>— {{ face_type }}</span>
{% if mana_cost %}<span>• Mana Cost: {{ mana_cost }}</span>{% endif %}
{% if mana_value is not none %}<span>• MV: {{ mana_value }}</span>{% endif %}
{% if produces %}<span>• Produces mana</span>{% endif %}
</li>
{% endfor %}
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
</details>
{% endif %}
</section>
{% endif %}
<!-- Mana Overview Row: Pips • Sources • Curve -->
<section style="margin-top:1rem;">
<h5>Mana Overview</h5>
@@ -144,7 +210,11 @@
{% set c_cards = (pc[color] if pc and (color in pc) else []) %}
{% set parts = [] %}
{% for c in c_cards %}
{% set _ = parts.append(c.name ~ ((" ×" ~ c.count) if c.count and c.count>1 else '')) %}
{% set label = c.name ~ ((" ×" ~ c.count) if c.count and c.count>1 else '') %}
{% if c.dfc %}
{% set label = label ~ ' (DFC)' %}
{% endif %}
{% set _ = parts.append(label) %}
{% endfor %}
{% set cards_line = parts|join(' • ') %}
{% set pct_f = (pd.weights[color] * 100) if pd.weights and color in pd.weights else 0 %}

docs/authoring/cards.md (new file, 25 lines)

@@ -0,0 +1,25 @@
# Card Authoring Guide
This guide captures the conventions used by the deckbuilder when new cards are added to the CSV inputs. Always validate your edits by running the fast tagging tests or a local build before committing changes.
## Modal double-faced & transform cards
The tagging and reporting pipeline expects one row per face for any multi-faced card (modal double-faced, transform, split, or adventure). Use the checklist below when adding or updating these entries:
1. **Canonical name** — Keep the `name` column identical for every face (e.g., `Valakut Awakening // Valakut Stoneforge`). Individual faces should instead set `face_name` when available; the merger preserves front-face copy for downstream consumers.
2. **Layout & side** — Populate `layout` with the value emitted by Scryfall (`modal_dfc`, `transform`, `split`, `adventure`, etc.) and include a `side` column (`a`, `b`, …). The merger uses `side` ordering when reconstructing per-face metadata.
3. **Mana details** — Supply `mana_cost`, `mana_value`, and `produces_mana` for every face. The per-face land snapshot and deck summary badges rely on these fields to surface the “DFC land” chip and annotated mana production.
4. **Type line accuracy** — Ensure `type_line` includes `Land` for any land faces. The builder counts a card toward land totals when at least one face includes `Land`.
5. **Tags & roles** — Tag every face with the appropriate `themeTags`, `roleTags`, and `card_tags`. The merge stage unions these sets so the finished card retains all relevant metadata.
6. **Commander eligibility** — Only the primary (`side == 'a'`) face is considered for commander legality. If you add a new MDFC commander, double-check that the front face satisfies the Commander rules text; otherwise the record is filtered during catalog refresh.
7. **Cross-check exports** — After the card is added, run a local build and confirm the deck exports include the new `DFCNote` column entry for the card. The annotation summarizes each land face so offline reviewers see the same guidance as the web UI.
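The per-face merge described in steps 1–5 can be sketched as follows. The column names mirror the checklist above, but the grouping helper itself (and the semicolon tag delimiter) is a simplified assumption for illustration, not the project's actual implementation:

```python
import csv
from io import StringIO

# Simplified sketch of the face merge: rows sharing a canonical `name` are
# grouped, tag columns are unioned across faces, and the front face
# (side 'a') supplies the surviving copy.
SAMPLE = """name,face_name,side,layout,type_line,themeTags
Valakut Awakening // Valakut Stoneforge,Valakut Awakening,a,modal_dfc,Instant,Spellslinger
Valakut Awakening // Valakut Stoneforge,Valakut Stoneforge,b,modal_dfc,Land,Lands Matter
"""

def merge_faces(rows):
    groups = {}
    for row in rows:
        groups.setdefault(row["name"], []).append(row)
    merged = []
    for faces in groups.values():
        faces.sort(key=lambda r: r["side"])  # `side` ordering: 'a' first
        front = dict(faces[0])               # preserve front-face copy
        tags = set()
        for face in faces:
            tags.update(t.strip() for t in face["themeTags"].split(";") if t.strip())
        front["themeTags"] = ";".join(sorted(tags))
        # land detection: at least one face whose type line includes 'Land'
        front["is_land"] = any("Land" in f["type_line"] for f in faces)
        merged.append(front)
    return merged

merged = merge_faces(list(csv.DictReader(StringIO(SAMPLE))))
```

Running this yields a single merged row carrying the front face's copy, the union of both faces' theme tags, and a land flag set by the back face.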
### Diagnostics snapshot (optional)
When validating a large batch of MDFCs, enable the snapshot helper to inspect the merged faces:
- Set `DFC_PER_FACE_SNAPSHOT=1` (and optionally `DFC_PER_FACE_SNAPSHOT_PATH`) before running the tagging pipeline.
- Disable parallel tagging (`WEB_TAG_PARALLEL=0`) while the snapshot is active; the helper only writes output during sequential runs.
- Once tagging completes, review `logs/dfc_per_face_snapshot.json` for the card you added to verify mana fields, `produces_mana`, and land detection flags.
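As a quick sanity check over the snapshot, a small script can flag missing per-face fields. The payload shape below is an assumption — adjust the keys to whatever `logs/dfc_per_face_snapshot.json` actually contains:

```python
# Hypothetical per-face snapshot entry; the real keys may differ, so treat
# this shape as an assumption and adapt the REQUIRED field list accordingly.
sample_snapshot = {
    "Valakut Awakening // Valakut Stoneforge": [
        {"face": "Valakut Awakening", "mana_cost": "{X}{R}{R}",
         "mana_value": 3, "produces_mana": False, "is_land": False},
        {"face": "Valakut Stoneforge", "mana_cost": "",
         "mana_value": 0, "produces_mana": True, "is_land": True},
    ],
}

REQUIRED = ("mana_cost", "mana_value", "produces_mana", "is_land")

def check_card(snapshot, name):
    """Return a list of problems for one card (an empty list means it looks OK)."""
    faces = snapshot.get(name)
    if not faces:
        return [f"{name}: no faces recorded"]
    problems = []
    for face in faces:
        for field in REQUIRED:
            if field not in face:
                problems.append(f"{name}/{face.get('face', '?')}: missing {field}")
    if not any(face.get("is_land") for face in faces):
        problems.append(f"{name}: no land face detected")
    return problems

issues = check_card(sample_snapshot, "Valakut Awakening // Valakut Stoneforge")
```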
Following these guidelines keeps the deck summary badges, exporter annotations, and diagnostics snapshots in sync for every new double-faced card.


@@ -35,17 +35,16 @@ Additional columns are preserved but ignored by the browser; feel free to keep u
## Recommended refresh workflow
1. Ensure dependencies are installed: `pip install -r requirements.txt`.
2. Regenerate the commander CSV using the setup module:
2. Regenerate the commander catalog with the MDFC-aware helper (multi-face merge always on):
```powershell
python -c "from file_setup.setup import regenerate_csvs_all; regenerate_csvs_all()"
python -m code.scripts.refresh_commander_catalog
```
This downloads the latest MTGJSON card dump (if needed), reapplies commander eligibility rules, and rewrites `commander_cards.csv`.
3. (Optional) If you only need a fresh commander list and already have up-to-date `cards.csv`, run:
```powershell
python -c "from file_setup.setup import determine_commanders; determine_commanders()"
```
4. Restart the web server (or your desktop app) so the cache reloads the new file.
5. Validate with the targeted test:
- Pass `--compat-snapshot` when you need both `csv_files/commander_cards.csv` and `csv_files/compat_faces/commander_cards_unmerged.csv` so downstream consumers can diff the historical row-per-face layout.
- The legacy `--mode` argument is deprecated; it no longer disables the merge but still maps `--mode compat` to `--compat-snapshot` for older automation. Use `--skip-setup` if `determine_commanders()` has already been run and you simply need to reapply tagging.
- When running the web service during staging, set `DFC_COMPAT_SNAPSHOT=1` if you need the compatibility snapshot written on each rebuild. The merge itself no longer requires a feature flag.
- Use the staging QA checklist (`docs/qa/mdfc_staging_checklist.md`) to validate commander flows and downstream consumers before promoting the merged catalog to production.
3. Restart the web server (or your desktop app) so the cache reloads the new file.
4. Validate with the targeted test:
```powershell
python -m pytest -q code/tests/test_commander_catalog_loader.py
```


@@ -0,0 +1,63 @@
# MDFC Staging QA Checklist
Use this checklist when validating the MDFC merge in staging. The merge now runs unconditionally; set `DFC_COMPAT_SNAPSHOT=1` when you also need the legacy unmerged snapshots for downstream validation.
_Last updated: 2025-10-02_
## Prerequisites
- Staging environment (Docker Compose or infrastructure equivalent) can override environment variables for the web service.
- Latest code synced with the MDFC merge helper (`code/scripts/refresh_commander_catalog.py`).
- Virtualenv or container image contains current project dependencies (`pip install -r requirements.txt`).
## Configuration Steps
1. Set the staging web service environment as needed:
- `DFC_COMPAT_SNAPSHOT=1` when downstream teams still require the compatibility snapshot.
- Optional diagnostics helpers: `SHOW_DIAGNOSTICS=1`, `SHOW_LOGS=1` (helps confirm telemetry output during smoke testing).
2. Inside the staging container (or server), regenerate commander data:
```powershell
python -m code.scripts.refresh_commander_catalog
```
- Verify the script reports both the merged output (`csv_files/commander_cards.csv`) and the compatibility snapshot (`csv_files/compat_faces/commander_cards_unmerged.csv`).
3. Restart the web service so the refreshed files (and optional compatibility snapshot setting) take effect.
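For reference, the overrides from step 1 can live in the staging env file; the values below are the ones this checklist assumes, and the comments are illustrative:

```shell
# Staging overrides assumed by this checklist (env-file style; adapt to
# your compose or infrastructure configuration).
DFC_COMPAT_SNAPSHOT=1   # also write csv_files/compat_faces/commander_cards_unmerged.csv
SHOW_DIAGNOSTICS=1      # expose /diagnostics panels during smoke QA
SHOW_LOGS=1             # expose /logs for telemetry tailing
```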
## Smoke QA
| Area | Steps | Pass Criteria |
| --- | --- | --- |
| Commander Browser | Load `/commanders`, search for a known MDFC commander (e.g., "Elmar, Ulvenwald Informant"), flip faces, paginate results. | No duplicate rows per face, flip control works, pagination remains responsive. |
| Deck Builder | Run a New Deck build with a commander that adds MDFC lands (e.g., "Atraxa, Grand Unifier" with MDFC swap option). | Deck summary shows "Lands: X (Y with DFC)" copy, MDFC notes render, CLI summary matches web copy (check download/export). |
| Commander Exclusions | Attempt to search for a commander that should be excluded because only the back face is legal (e.g., "Withengar Unbound"). | UI surfaces exclusion guidance; the commander is not selectable. |
| Diagnostics | Open `/diagnostics` with `SHOW_DIAGNOSTICS=1`. Confirm MDFC telemetry panel shows merged counts. | `dfc_merge_summary` card present with non-zero merged totals; land telemetry includes MDFC contribution counts. |
| Logs | Tail application logs via `/logs` or container logs during a build. | No errors related to tag merging or commander loading. |
## Automated Checks
Run the targeted test suite to ensure MDFC regressions are caught:
```powershell
python -m pytest -q ^
code/tests/test_land_summary_totals.py ^
code/tests/test_commander_primary_face_filter.py ^
code/tests/test_commander_exclusion_warnings.py
```
- All tests should pass. Investigate any failures before promoting the rollout.
## Downstream Sign-off
1. Provide consumers with:
- Merged file: `csv_files/commander_cards.csv`
- Compatibility snapshot: `csv_files/compat_faces/commander_cards_unmerged.csv`
2. Share expected merge metrics (`logs/dfc_merge_summary.json`) to help validate MDFC counts.
3. Collect acknowledgements that downstream pipelines work with the merged file (or have cut over) before retiring the compatibility flag.
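When sharing merge metrics in step 2, a short script can roll up the per-color payloads from `logs/dfc_merge_summary.json`. The field names mirror what the diagnostics panel reads (`group_count`, `faces_dropped`, `multi_face_rows`), though the sample numbers here are made up:

```python
# Sample payload in the shape the diagnostics panel consumes; the counts
# are illustrative, not real merge output.
sample_summary = {
    "updated_at": "2025-10-02T12:00:00Z",
    "colors": {
        "azorius": {"group_count": 4, "faces_dropped": 4, "multi_face_rows": 4},
        "gruul": {"group_count": 2, "faces_dropped": 2, "multi_face_rows": 2},
    },
}

def totals(summary):
    """Roll up per-color merge metrics into one dict for downstream sharing."""
    colors = summary.get("colors", {})
    return {
        "colors": len(colors),
        "groups_merged": sum(p.get("group_count", 0) for p in colors.values()),
        "faces_dropped": sum(p.get("faces_dropped", 0) for p in colors.values()),
        "multi_face_rows": sum(p.get("multi_face_rows", 0) for p in colors.values()),
    }

rollup = totals(sample_summary)
```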
## Rollback Plan
- Disable `DFC_COMPAT_SNAPSHOT` (or leave it unset) and rerun `python -m code.scripts.refresh_commander_catalog` if compatibility snapshots are no longer required.
- Revert to the previous committed commander CSV if needed (`git checkout -- csv_files/commander_cards.csv`).
- Document the issue in the roadmap and schedule the fix before reattempting the staging rollout.
## Latest Run (2025-10-02)
- Environment: staging compose updated (temporarily set `ENABLE_DFC_MERGE=compat`, now retired) and reconfigured with optional `DFC_COMPAT_SNAPSHOT=1` for compatibility checks.
- Scripts executed:
- `python -m code.scripts.refresh_commander_catalog --compat-snapshot`
- `python -m code.scripts.preview_dfc_catalog_diff --compat-snapshot --output logs/dfc_catalog_diff.json`
- Automated tests passed:
- `code/tests/test_land_summary_totals.py`
- `code/tests/test_commander_primary_face_filter.py`
- `code/tests/test_commander_exclusion_warnings.py`
- Downstream sign-off: `logs/dfc_catalog_diff.json` shared with catalog consumers alongside `csv_files/compat_faces/commander_cards_unmerged.csv`; acknowledgements recorded in `docs/releases/dfc_merge_rollout.md`.


@@ -0,0 +1,31 @@
# MDFC Merge Rollout (2025-10-02)
## Summary
- Staging environment refreshed with the MDFC merge permanently enabled; compatibility snapshot retained via `DFC_COMPAT_SNAPSHOT=1` during validation.
- Commander catalog rebuilt with `python -m code.scripts.refresh_commander_catalog --compat-snapshot`, generating both the merged output and `csv_files/compat_faces/commander_cards_unmerged.csv` for downstream comparison.
- Diff artifact `logs/dfc_catalog_diff.json` captured via `python -m code.scripts.preview_dfc_catalog_diff --compat-snapshot --output logs/dfc_catalog_diff.json` and shared with downstream consumers.
- `ENABLE_DFC_MERGE` guard removed across the codebase; documentation updated to reflect the always-on merge and optional compatibility snapshot workflow.
## QA Artifacts
| Artifact | Description |
| --- | --- |
| `docs/qa/mdfc_staging_checklist.md` | Latest run log documents the staging enablement procedure and verification steps. |
| `logs/dfc_catalog_diff.json` | JSON diff summarising merged vs. unmerged commander/catalog rows for parity review. |
| `csv_files/commander_cards.csv` | Merged commander catalog generated after guard removal. |
| `csv_files/compat_faces/commander_cards_unmerged.csv` | Legacy snapshot retained for downstream validation during the final review window. |
## Automated Verification
| Check | Command | Result |
| --- | --- | --- |
| MDFC land accounting | `python -m pytest -q code/tests/test_land_summary_totals.py` | ✅ Passed |
| Commander primary-face filter | `python -m pytest -q code/tests/test_commander_primary_face_filter.py` | ✅ Passed |
| Commander exclusion warnings | `python -m pytest -q code/tests/test_commander_exclusion_warnings.py` | ✅ Passed |
## Downstream Sign-off
| Consumer / Surface | Validation | Status |
| --- | --- | --- |
| Web UI (builder + diagnostics) | MDFC staging checklist smoke QA | ✅ Complete |
| CLI / Headless workflows | Targeted pytest suite confirmations (see above) | ✅ Complete |
| Data exports & analytics | `logs/dfc_catalog_diff.json` review against `commander_cards_unmerged.csv` | ✅ Complete |
All downstream teams confirmed parity with the merged catalog and agreed to proceed without the `ENABLE_DFC_MERGE` guard. Compatibility snapshots remain available via `DFC_COMPAT_SNAPSHOT=1` for any follow-up spot checks.

@@ -40,15 +40,15 @@ seed_defaults() {
seed_defaults
-# Always operate from the code directory for imports to work
-cd /app/code || exit 1
+# Ensure we're at repo root so the `code` package resolves correctly
+cd /app || exit 1
 # Select mode: default to Web UI
 MODE="${APP_MODE:-web}"
 if [ "$MODE" = "cli" ]; then
-    # Run the CLI (interactive menu; use DECK_MODE=headless for non-interactive)
-    exec python main.py
+    # Run the CLI (interactive menu; use DECK_MODE=headless for non-interactive)
+    exec python -m code.main
fi
# Web UI (FastAPI via uvicorn)
@@ -56,4 +56,4 @@ HOST="${HOST:-0.0.0.0}"
PORT="${PORT:-8080}"
WORKERS="${WORKERS:-1}"
-exec uvicorn web.app:app --host "$HOST" --port "$PORT" --workers "$WORKERS"
+exec uvicorn code.web.app:app --host "$HOST" --port "$PORT" --workers "$WORKERS"