Note: This project was built in 24 hours for the Sheridan Datathon in collaboration with @YeehawMcfly and @Abhiroop-Tech.
Each year, an estimated 20,000 whales are killed globally by collisions with vessels, making ship strikes the leading cause of death for large whale species. This ongoing threat endangers the survival of these "ecosystem engineers," with detrimental effects that ripple throughout the marine ecosystem.
To address this issue, we created the SafeShip initiative: a data-driven platform that monitors whale habitats, evaluates route risk in real time, and provides instantaneous rerouting suggestions. SafeShip aims to save whale lives and protect vital marine biodiversity while maintaining the efficiency of global shipping operations.
SafeShip is a full-stack maritime safety platform designed to prevent collisions between commercial vessels and large whale species. It integrates real-time AIS vessel tracking, machine learning risk prediction, and LLM-generated safety briefings to provide actionable insights for bridge crews.
- Search vessels by name (e.g., "EVER GIVEN")
- View ships near your location or within a bounding box
- Query ships in specific ports
- Live AIS position updates
- Predicts whale presence risk: HIGH / MEDIUM / LOW
- Trained on OBIS-SEAMAP whale sighting dataset
- Features: latitude, longitude, month (seasonal patterns)
- ~85-90% accuracy
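A minimal sketch of how such a classifier could be trained with scikit-learn. The synthetic data and toy labeling rule below are illustrative only; the real model is trained on OBIS-SEAMAP sightings:

```python
# Illustrative sketch: training a whale-presence classifier on
# (latitude, longitude, month) features. The data here is synthetic;
# the actual SafeShip model is trained on OBIS-SEAMAP data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.uniform(-60, 60, n),     # latitude
    rng.uniform(-180, 180, n),   # longitude
    rng.integers(1, 13, n),      # month (captures seasonal patterns)
])
# Toy label: pretend whales cluster at higher latitudes in summer months
y = ((np.abs(X[:, 0]) > 30) & (X[:, 2] >= 5) & (X[:, 2] <= 9)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# predict_proba gives P(whale presence); the service maps this to a risk tier
prob = model.predict_proba([[40.7, -74.0, 11]])[0, 1]
print(f"whale presence probability: {prob:.2f}")
```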
- Visual Analysis: AI sees the map screenshot (html2canvas)
- Context-Aware: Integrates vessel data, track history, whale risk
- Natural Language: Bridge crew briefings with actionable recommendations
- Markdown Rendering: Beautiful formatted output
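A sketch of how the context-aware briefing request might be assembled before being sent to the insight endpoint. The field names follow the API example later in this README; `build_insight_request` is a hypothetical helper, not actual project code:

```python
# Sketch: assembling the context-aware payload for POST /api/gemini/insight.
# build_insight_request is illustrative; field names follow the API example.
import json

def build_insight_request(ship, track, question, map_png_data_url=None):
    payload = {"ship": ship, "track": track, "question": question}
    if map_png_data_url:
        # Optional html2canvas screenshot, used for visual map analysis
        payload["mapSnapshot"] = {"dataUrl": map_png_data_url,
                                  "mimeType": "image/png"}
    return json.dumps(payload)

body = build_insight_request(
    ship={"lat": 35.2, "lon": 139.5, "mmsi": "311918000"},
    track=[{"lat": 35.1, "lon": 139.4}],
    question="What should the bridge team know?",
)
```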
To watch SafeShip in action, simply click on this video:
The system utilizes a microservices architecture with a React frontend, Node.js backend gateway, and a Python/Flask machine learning service.
┌─────────────────────────────────────────────┐
│ Frontend (React + Leaflet + Gemini UI) │ Port 5173
│ - Interactive ship map │
│ - Real-time vessel tracking │
│ - AI safety briefings │
└──────────────────┬──────────────────────────┘
│
┌─────────┴─────────┐
▼ ▼
┌──────────────────┐ ┌──────────────────┐
│ Backend API │ │ ML Service │
│ (Node/TS) │ │ (Python/Flask) │
│ Port 5001 │ │ Port 5002 │
│ │ │ │
│ • Ship tracking │ │ • Whale risk │
│ • Gemini AI │ │ prediction │
│ • Route calc │ │ • GBM classifier │
└──────────────────┘ └──────────────────┘
│ │
▼ ▼
┌──────────────────┐ ┌──────────────────┐
│ MyShipTracking │ │ OBIS Whale Data │
│ AIS API │ │ (trained model) │
└──────────────────┘ └──────────────────┘
- Frontend: React, TypeScript, Leaflet, Vite, Marked
- Backend: Node.js, Express, TypeScript
- ML: Python, Flask, scikit-learn, pandas, numpy
- Data: MyShipTracking API (AIS), OBIS-SEAMAP (whale sightings)
- Node.js 18+
- Python 3.8+
- MyShipTracking API key (Get one here)
- Google Gemini API key (Get one here)
Run the automated setup script to install all dependencies:
```
.\setup.bat
```

This will:
- Install Node.js packages for backend & frontend
- Install Python dependencies for ML service
- Train the whale risk detection model
- Navigate to the `backend` folder
- Copy `.env.template` to `.env`:

```
cd backend
copy .env.template .env
```

- Edit `.env` and add your API keys:

```
MST_API_KEY=your_myshiptracking_key_here
GEMINI_API_KEY=your_gemini_api_key_here
```
Run the startup script (uses fixed Python path for ML service):
```
.\start_fixed.ps1
```

This will launch:

- ML Service on `http://localhost:5002` (Python Flask)
- Backend API on `http://localhost:5001` (Node.js/Express)
- Frontend on `http://localhost:5173` (React + Vite)
Navigate to http://localhost:5173 in your browser.
Ship Tracking:
- `GET /api/vessels/search/:name` - Search vessels
- `GET /api/vessels/status/:mmsi` - Current position (extended AIS data)
- `GET /api/vessels/track/:mmsi?days=1` - Historical track
- `GET /api/vessels/status/:mmsi/whale-risk` - Position + whale risk
AI Insights:
- `POST /api/gemini/insight` - Generate Gemini safety briefing

```json
{
  "ship": { "lat": 35.2, "lon": 139.5, "mmsi": "311918000", ... },
  "track": [...],
  "question": "What should the bridge team know?",
  "mapSnapshot": { "dataUrl": "data:image/png;base64,...", "mimeType": "image/png" }
}
```
Whale Risk:
- `POST /api/whale-risk` - Risk prediction for given coordinates

```json
{ "latitude": 40.7, "longitude": -74.0, "month": 11 }
```
Algorithm: Gradient Boosting Classifier
Features: Latitude, Longitude, Month
Training Data: OBIS-SEAMAP whale sightings + synthetic migration patterns
Risk Thresholds:
- 🔴 HIGH (>60%): Reduce speed to <10 knots, post whale watch
- 🟡 MEDIUM (30-60%): Exercise caution, brief crew
- 🟢 LOW (<30%): Standard protocols
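The thresholds above map the classifier's probability to a risk tier. A minimal sketch of that mapping (the function name is illustrative, not the service's actual code):

```python
# Sketch: mapping P(whale presence) to the documented risk tiers.
# Thresholds follow the README; risk_tier is an illustrative name.
def risk_tier(p: float) -> str:
    if p > 0.60:
        return "HIGH"    # reduce speed to <10 knots, post whale watch
    if p >= 0.30:
        return "MEDIUM"  # exercise caution, brief crew
    return "LOW"         # standard protocols

print(risk_tier(0.72), risk_tier(0.45), risk_tier(0.10))
```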
Model Files:
- `ml/whale_risk_model.pkl` - Trained classifier
- `ml/train_whale_model.py` - Training script
- `ml/whale_predictor.py` - Standalone predictor
```
PORT=5001
MST_API_KEY=your_myshiptracking_api_key
MST_CACHE_TTL_MS=30000
MST_DEFAULT_MINUTES_BACK=60
GEMINI_API_KEY=your_google_gemini_api_key
GEMINI_MODEL=gemini-2.0-flash-lite
```

The ML service needs no configuration (port 5002 is hardcoded).
/
├── backend/ # Node.js/TypeScript API
│ ├── src/
│ │ ├── classes/server.ts # Express routes
│ │ ├── services/
│ │ │ ├── mstClient.ts # MyShipTracking client
│ │ │ ├── whaleRiskService.ts
│ │ │ └── geminiService.ts # Gemini AI integration
│ │ └── index.ts
│ ├── .env.template
│ └── package.json
│
├── frontend/ # React + Vite
│ ├── src/
│ │ ├── components/ShipMap.tsx # Main map UI
│ │ ├── App.css # Styling
│ │ └── main.tsx
│ └── package.json
│
├── ml/ # Python Flask ML service
│ ├── api.py # Flask server
│ ├── train_whale_model.py # Model training
│ ├── whale_predictor.py # Inference script
│ ├── whale_risk_model.pkl # Trained model
│ └── requirements.txt
│
├── setup.bat # Dependency installer
├── start_fixed.ps1 # Startup script (all services)
└── README.md
Retrain the model:

```
cd ml
python train_whale_model.py
```

Backend:

```
cd backend
npm start
```

ML Service:

```
cd ml
python api.py
```

Frontend:

```
cd frontend
npm run dev
```

This project is licensed under the MIT License. See LICENSE for more details.
