A global logistics company is struggling to track shipments in real time, predict delivery delays, and optimize warehouse routing across multiple distribution centers.
Your team's job is to build an AI-augmented delivery database system that ingests shipment records or synthetic logistics data to create live tracking profiles and generate actionable predictions for operations managers.
You pick the domain — everyday packages, semiconductor supply chains, or even Beanie Babies. If it ships, you can track it.
A regional hospital network is struggling to allocate ICU beds, predict readmissions, and manage appointment no-shows.
Your team's job is to build a data-driven platform that ingests structured health records or synthetic patient data to create "digital twin" profiles and generate actionable predictions for clinical decision-makers.
The NYC Department of Emergency Management needs to act before small signals turn into citywide crises. Weather, hospital capacity, and transit data all exist — but they are disconnected and difficult to analyze together.
Your team's job is to build a cloud-based AI decision-support system that continuously pulls data from public APIs and stores it in a structured database organized by neighborhood and time.
We recommend structuring your project around four components — but build it however works for your team.
Get data into your system. Some approaches: poll public APIs on a schedule, load exported records in bulk, or generate synthetic data for your chosen domain.
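As one possible sketch of the synthetic-data approach, here is a minimal ingestion step for the logistics scenario. All names (the `shipments` table, the hub codes, the column layout) are hypothetical placeholders — adapt them to your domain:

```python
import sqlite3
import random
from datetime import datetime, timedelta

# Hypothetical hub codes for the logistics scenario.
HUBS = ["NYC", "CHI", "LAX", "ATL"]

def ingest_synthetic_shipments(conn, n=100):
    """Generate n synthetic shipment records and load them into SQLite."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS shipments (
               id INTEGER PRIMARY KEY,
               origin TEXT, destination TEXT,
               shipped_at TEXT, eta TEXT, status TEXT)"""
    )
    base = datetime(2024, 1, 1)  # fixed base time keeps the demo reproducible
    rows = []
    for _ in range(n):
        origin, dest = random.sample(HUBS, 2)  # origin != destination
        shipped = base + timedelta(hours=random.randint(0, 72))
        eta = shipped + timedelta(hours=random.randint(12, 96))
        rows.append((origin, dest, shipped.isoformat(),
                     eta.isoformat(), "in_transit"))
    conn.executemany(
        "INSERT INTO shipments (origin, destination, shipped_at, eta, status)"
        " VALUES (?, ?, ?, ?, ?)", rows)
    conn.commit()
    return n

conn = sqlite3.connect(":memory:")
loaded = ingest_synthetic_shipments(conn)
print(loaded)  # number of rows loaded
```

The same shape works for the hospital or NYC scenarios — swap the synthetic generator for an API poller or a record-export loader.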
Store your data in a structured way. A good schema usually covers your core entities, the timestamps and locations attached to them, and the events and predictions that link them together.
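One way that could look for the logistics scenario (every table and column name here is a hypothetical starting point, not a requirement):

```python
import sqlite3

# A hypothetical minimal schema: entities (hubs, shipments), timestamped
# events, and a predictions table. The same shape adapts to the other domains.
SCHEMA = """
CREATE TABLE hubs (
    code TEXT PRIMARY KEY,          -- distribution-center code
    name TEXT,
    region TEXT
);
CREATE TABLE shipments (
    id INTEGER PRIMARY KEY,
    origin TEXT REFERENCES hubs(code),
    destination TEXT REFERENCES hubs(code),
    shipped_at TEXT,                -- ISO-8601 timestamps
    promised_eta TEXT
);
CREATE TABLE events (
    id INTEGER PRIMARY KEY,
    shipment_id INTEGER REFERENCES shipments(id),
    occurred_at TEXT,
    kind TEXT                       -- scanned / departed / delayed / delivered
);
CREATE TABLE predictions (
    shipment_id INTEGER REFERENCES shipments(id),
    predicted_eta TEXT,
    generated_at TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
```

For the NYC scenario, the equivalent move is keying tables by neighborhood and time window instead of shipment and hub.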
Wire up ingestion, predictions, and query results. Common options: FastAPI, Next.js API routes, Express, or Supabase Edge Functions.
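To keep this sketch dependency-free it uses Python's standard-library `http.server` rather than FastAPI or the other recommended options; the shipment ID and response fields are made up for illustration:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory stand-in for the database layer; a real build would query
# your schema instead. Data below is hypothetical.
SHIPMENTS = {"1042": {"status": "in_transit", "predicted_delay_hours": 6}}

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route: GET /shipments/<id> -> tracking info + prediction as JSON
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "shipments" and parts[1] in SHIPMENTS:
            body = json.dumps(SHIPMENTS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), ApiHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/shipments/1042"
with urllib.request.urlopen(url) as resp:
    result = json.loads(resp.read())
print(result)
server.shutdown()
```

In FastAPI the handler collapses to a single decorated function, but the contract is the same: one endpoint per operational question.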
A simple interface to query your system and display results. Streamlit, Next.js, and Retool are all great starting points.
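Before reaching for a frontend framework, even a plain text-table renderer over your database can serve as the query interface during early development. A minimal sketch (the demo table and data are hypothetical):

```python
import sqlite3

def show(conn, sql, params=()):
    """Run a query and print the results as an aligned text table."""
    cur = conn.execute(sql, params)
    headers = [d[0] for d in cur.description]
    rows = [[str(v) for v in row] for row in cur.fetchall()]
    widths = [max(len(h), *(len(r[i]) for r in rows)) if rows else len(h)
              for i, h in enumerate(headers)]
    lines = [" | ".join(h.ljust(w) for h, w in zip(headers, widths))]
    lines.append("-+-".join("-" * w for w in widths))
    for r in rows:
        lines.append(" | ".join(v.ljust(w) for v, w in zip(r, widths)))
    out = "\n".join(lines)
    print(out)
    return out

# Hypothetical demo data: delayed shipments per destination.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE shipments (destination TEXT, delayed INTEGER);
INSERT INTO shipments VALUES ('NYC', 1), ('NYC', 0), ('CHI', 1);
""")
table = show(conn,
    "SELECT destination, SUM(delayed) AS delayed"
    " FROM shipments GROUP BY destination")
```

Swapping `show` for a Streamlit `st.dataframe` call later is a one-line change, since the query layer stays the same.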
Pick whatever you're comfortable with — these are starting points, not rules.
Is the system well structured?
Does the ingestion pipeline work?
Does the system generate useful predictions?
Can the system answer operational questions?
Is the system understandable and usable?
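As a concrete instance of the "useful predictions" criterion, even a naive baseline counts if it answers an operational question. The sketch below predicts a shipment's delay as the historical mean delay on its route; the routes and delay values are invented for illustration:

```python
from collections import defaultdict

# Hypothetical history: (origin, destination, delay in hours).
history = [
    ("NYC", "CHI", 2.0), ("NYC", "CHI", 4.0),
    ("LAX", "ATL", 0.0), ("LAX", "ATL", 6.0),
]

def fit_route_means(records):
    """Compute the mean historical delay per (origin, destination) route."""
    sums = defaultdict(lambda: [0.0, 0])  # route -> [total delay, count]
    for origin, dest, delay in records:
        acc = sums[(origin, dest)]
        acc[0] += delay
        acc[1] += 1
    return {route: total / n for route, (total, n) in sums.items()}

model = fit_route_means(history)

def predict_delay(model, origin, dest, default=0.0):
    # Fall back to a default for routes with no history.
    return model.get((origin, dest), default)

print(predict_delay(model, "NYC", "CHI"))  # → 3.0
print(predict_delay(model, "SEA", "DEN"))  # unseen route → default 0.0
```

A baseline like this also gives you something honest to compare any fancier model against.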