Add Python web scraper for NJC travel rates with currency extraction

- Implemented Python scraper using BeautifulSoup and pandas to automatically collect travel rates from official NJC website
- Added currency extraction from table titles (supports EUR, USD, AUD, CAD, ARS, etc.)
- Added country extraction from table titles for international rates
- Flattened pandas MultiIndex columns for a cleaner data structure
- Defaulted to CAD for domestic Canadian sources (accommodation and domestic tables)
- Created SQLite database schema (raw_tables, rate_entries, exchange_rates, accommodations)
- Successfully scraped 92 tables with 17,205 rate entries covering 25 international cities
- Added migration script to convert scraped data to Node.js database format
- Updated .gitignore for Python files (.venv/, __pycache__, *.pyc, *.sqlite3)
- Fixed city validation and currency conversion in main app
- Added comprehensive debug and verification scripts
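The currency and country extraction described above could be sketched roughly as follows. This is a minimal illustration, not the scraper's actual code: the title format, the `parse_title` helper, and the regex are assumptions based on the bullet points (titles embedding a country name and a three-letter currency code, with CAD as the domestic fallback).

```python
import re

# Assumed title shape, e.g. "Argentina (Rates in ARS)"; real NJC titles may differ.
TITLE_RE = re.compile(r"^(?P<country>[^(]+?)\s*\((?:Rates in\s+)?(?P<currency>[A-Z]{3})\)")

def parse_title(title: str) -> tuple[str, str]:
    """Extract (country, currency) from a table title."""
    m = TITLE_RE.search(title)
    if m:
        return m.group("country").strip(), m.group("currency")
    # Domestic Canadian tables carry no currency marker; default to CAD.
    return title.strip(), "CAD"

print(parse_title("Argentina (Rates in ARS)"))  # ('Argentina', 'ARS')
print(parse_title("Alberta"))                   # ('Alberta', 'CAD')
```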

This replaces manual JSON maintenance with automated data collection from official government source.
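The MultiIndex flattening mentioned in the bullets could look like the sketch below. It assumes the two-header-row tables that `pandas.read_html` typically returns; the column names and separator are illustrative, not the scraper's actual choices.

```python
import pandas as pd

# Illustrative table with a two-level header, as read_html often produces.
df = pd.DataFrame(
    [[25.0, 30.0]],
    columns=pd.MultiIndex.from_tuples([("Meals", "Breakfast"), ("Meals", "Lunch")]),
)

# Join header levels with a separator, dropping repeated level values.
df.columns = [
    " - ".join(dict.fromkeys(str(part) for part in col))
    for col in df.columns
]
print(df.columns.tolist())  # ['Meals - Breakfast', 'Meals - Lunch']
```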
commit 15094ac94b
Date: 2026-01-13 09:21:43 -05:00
84 changed files with 19859 additions and 0 deletions

import sqlite3

# Verification script: spot-check that currency extraction produced sensible
# values in the scraped rate_entries table.
conn = sqlite3.connect('data/travel_rates_scraped.sqlite3')
cursor = conn.cursor()

# Sample breakfast rates for two countries to compare extracted currencies.
for country in ('Argentina', 'Albania'):
    print(f"{country} entries with breakfast:")
    cursor.execute(
        """
        SELECT country, city, rate_type, rate_amount, currency
        FROM rate_entries
        WHERE country LIKE ? AND rate_type LIKE '%breakfast%'
        LIMIT 5
        """,
        (f'%{country}%',),
    )
    for row in cursor.fetchall():
        print(f"  {row}")
    print()

# List every Argentine city with its assigned currency.
print("All Argentina city entries:")
cursor.execute(
    """
    SELECT DISTINCT city, currency
    FROM rate_entries
    WHERE country LIKE '%Argentina%'
    """
)
for city, currency in cursor.fetchall():
    print(f"  {city}: {currency}")

conn.close()