Hack the Border 2025 — Adversarial Thinking & Cybersecurity with AI
Theme
Defend the Future — Adversarial Thinking & Cybersecurity with AI — build or educate for people-first defenses against modern threats (phishing, social engineering, deepfakes/voice clones, data mishandling) using AI as a tool for defense and a lens to reason about risk. Projects should be transparent, safe, useful, and privacy-respecting, teaching communities to anticipate, test, and counter attacks with clear, adoptable practices.
Project (Optional)
Over ~6 weeks, create a focused prototype (web/app/bot) or an educational piece (video/carousel) that solves a real local need; submit on Devpost one week before the event with a mandatory video.
Scoring
Final score = 50% Project (judged during the week before) + 50% Villages.
Villages (Day 2)
Earn points by participating in CTF, Secure Coding, and Other Activities—beginner-friendly; partial progress still counts.
Event Flow
Day 1 = Registration + Industry Panel Night • Day 2 = Villages, gallery-style showcase (we’ll play your videos), lightning demos, and awards.
Track 1 — Adversarial Thinking
Help people spot risks and choose simple defenses—think like attacker and defender (ethically).
Example deliverables:
-
Scenario Generator (attack/defend): Short caselets (AI phish, voice clone, social engineering, data mishandling) with reflection questions + facilitator notes. MVP: 5 scenarios + PDF export. Stretch: deck export and difficulty levels.
-
Prompt Jailbreak Lab (beginner-safe): Demo a risky prompt → show the safer rewrite and guardrails; explain why it works. MVP: 3 jailbreaks + fixes. Stretch: side-by-side output viewer + “copy safe prompt.”
-
Attack-Tree → Blue-Plan Builder: Sketch threats for a chosen context; tool suggests countermeasures and a 15-minute tabletop drill sheet. MVP: drag-and-drop nodes + printable drill. Stretch: library of common threats/controls.
Notes: Keep ethics front-and-center; avoid real sensitive data; include a lightweight README (what/how/limits) and the video showing threat → defense → how to use it.
Track 2 — Community Cybersecurity
Make everyday digital life safer for people and small orgs—simple, respectful, adoptable.
Example deliverables:
-
Cyber Hygiene Coach (web app): Quick self-audit → prioritized fixes (passwords/MFA/updates/backups) with how-to guides. MVP: checklist + links. Stretch: reminders, progress tracking, printable report.
-
Phishing Drill-in-a-Box: Click-through simulations + debrief pages and talk tracks for clubs/nonprofits (no real PII). MVP: 3 scenarios + debrief notes. Stretch: results dashboard, auto-generated feedback email.
-
Public Computer/Kiosk Hardening Wizard: “Before you log off” checklist (clear cache/downloads/clipboard) + admin setup tips; printable station placards. MVP: single page + PDF. Stretch: timer and end-of-session prompts.
Optional education asset for any of the above: 2–5 min video or one-pager “Top 10 fixes in 10 minutes” (EN/ES).
Track 3 — Prompt Engineering
Build reusable, responsible prompting foundations for everyone.
Example deliverables:
-
Prompt Safety Checker (web app): Paste a prompt → flag privacy/bias/policy risks, show safer rewrites. MVP: rule-based checks + red/yellow/green + 3 rewrites. Stretch: domain presets; revision log.
-
Prompt Pattern Library: Taggable gallery of proven patterns (Tutor, Critic, SQLizer), with “when to use,” pitfalls, copy-to-clipboard. Stretch: side-by-side “test this pattern” sandbox.
-
Prompt Robustness Tester: Auto-perturb prompts (typos/synonyms/order) and show stability metrics + suggested guardrails.
-
Persona Prompt Composer: Wizard to generate role prompts (translator, safety reviewer) with tone, constraints, refuse rules, and a disclosure line. Stretch: save/share presets.
Optional education asset for any of the above: 1–3 page “Prompt Do/Don’t” cards or a 2–5 min micro-lesson.
Track 4 — AI Study & Helper Bots
Design helpful, privacy-aware assistants for learning and everyday community services.
Example deliverables:
Learning & study
-
Syllabus-to-Study-Plan: Upload a syllabus → weekly plan, practice prompts, spaced-repetition reminders. Stretch: calendar sync, adaptive difficulty.
-
“Explain Like I’m New Here” Tutor: Intake prior knowledge → stepwise explanation → mini-quiz with feedback; bilingual toggle.
-
Study Group Mode: Shared room where the bot rotates prompts, keeps time, assigns roles (explainer/checker).
Community & services
-
Small-Business Policy Buddy: Generates plain-language policies (passwords/updates/backups) + rollout plan and to-do list.
-
Event Safety & Comms Assistant: Drafts safety briefings, signage text, lost-and-found workflow, emergency SMS templates; PDF export.
-
Library/Kiosk Guardian: End-of-session checklist (clear cache/downloads/clipboard) + timers/reminders; optional admin analytics.
Notes: For any bot, include a lightweight README (what/how/limits), a privacy note (no real PII), and a mandatory video.
Deliverable guidance (applies to all tracks)
-
Build (prototype): working slice (web/app/bot/extension) + README (what it does, how to try, limits, what’s next).
-
Educate (media): 2–5 min video or 5–8 tile carousel + sources + “do this next” checklist.
-
Mandatory video: overview for all submissions (used in showcase).
Requirements
What to Submit
One Devpost submission per team (due one week before the event):
- Team name & members (4)
-
Project title
-
Short description (3–5 sentences)
-
Demo link or screenshots (GitHub, live link, video, or slides)
-
README / Learning resources (how to try; who it helps; limitations)
-
Mandatory video — problem, who benefits, brief demo/storyboard, local fit, what’s next
Paths
-
Build (Prototype): live link or local demo instructions plus the video
-
Educate (Media): media asset + source list plus the video
Video Requirements
-
Show: problem → what it does → community fit → what’s next
-
Substance over cinematics; clarity is what’s judged
Prizes
First Place
First Place Prize
Second Place
Second Place Prize
Third Place
Third Place Prize
Devpost Achievements
Submitting to this hackathon could earn you:
Judges
Anonymous
Anonymous
Anonymous
Anonymous
Judging Criteria
-
Impact & Relevance
Rates how well the project addresses a real, local need and benefits a clear audience. Strong entries show community context, credible use cases, and a path to adoption. -
Clarity & UX
Evaluates how easy it is to understand and use. We look for plain language, clean flow, and inclusive design (accessibility and, where helpful, bilingual support). -
Originality
Measures freshness of the idea or framing and constructive reuse of tools. Credit prior work, but add a clear value-add or novel angle. -
Functionality / Instructional Quality
For Build entries: does something run and demonstrate the core workflow with basic documentation and guardrails? For Educate entries: is the content accurate, well-sourced, and effective at teaching a practical takeaway? -
Presentation
Assesses storytelling and time discipline. Great demos make the problem, solution, impact, and next steps obvious—without going over time. -
Effort & Teamwork
Considers scope realism, visible collaboration, and iteration. Show roles, progress, and what you learned or changed along the way.
Questions? Email the hackathon manager
Tell your friends
This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply.