COMPASS is an intelligent, memory-driven platform that automates career placement programmes end-to-end — so every human involved can focus entirely on work only humans can do.
Every year NTUC LHub runs programmes that change careers. The people running them are committed. The problem is that the system they work within makes every part of their job harder than it needs to be.
These are not four separate problems. They are one problem expressed four ways: a system that generates experience without capturing it, accumulates data without learning from it, and places humans in the role of connective tissue that the software should be providing.
Salespeople manually assess suitability with no outcome data. Inconsistency baked in from day one.
Post-training follow-up by email. No engagement, no response. Data collection fails at the moment it matters most.
CVs collected manually. Employer matching done by humans with no systematic memory of what worked before.
Government reports compiled manually from Excel. Errors, delays, and zero time to intervene before submission.
Manages 200+ active trainees daily across multiple programmes. Spends hours switching between CRM, SharePoint and email.
A daily brief of decisions that need a human. Everything else — follow-ups, reminders, status updates — runs autonomously. Her cognitive load drops by 80%.
First touchpoint for every trainee. Currently the inconsistency bottleneck — assessing suitability manually with no system support.
A decision brief for every application: outcome-weighted recommendations with full reasoning drawn from thousands of prior trajectories. His job becomes approval and relationship-building.
35-year-old retail manager transitioning to ICT. Completed training. Currently receiving generic email reminders he ignores.
A live placement feed showing employer views, match scores, and specific skill gaps. The system delivers value before asking for anything. Engagement becomes rational.
Receives candidate referrals from NTUC LHub. Currently frustrated by poor match quality and reactive placement process.
A live pipeline of candidates currently in training who match his profile — before they're available. The system learns his revealed preferences over time. Bad matches become rare.
Prepares government reports for SSG monthly. Currently compiling manually from Excel under deadline pressure.
Reports that write themselves. A continuous compliance state updated in real time. One-action submission. Her job becomes certification, not compilation.
The most critical and most neglected journey in the system
Each component below is annotated with its role, its technology, and why it was designed that way.
The AI ecosystem lives in Python. Keeping the backend in the same language as the intelligence layer eliminates serialisation overhead for every agent call. FastAPI gives async performance comparable to Node.js for I/O-bound workloads.
Open source, self-hostable, with hybrid search combining semantic vectors and keyword matching in one query. Pure semantic search misses exact credential matches. Pure keyword misses semantic similarity. Weaviate handles both.
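The fusion Weaviate performs can be sketched in a few lines. This is a toy illustration, not Weaviate's implementation: the documents, vectors, and scoring helpers are invented for the example, and the `alpha` blend mirrors the role of Weaviate's hybrid-search `alpha` parameter.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    vector: list[float]  # stand-in for a real embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query: str, text: str) -> float:
    # Fraction of query terms appearing verbatim — catches exact credential matches.
    terms = query.lower().split()
    hits = sum(1 for t in terms if t in text.lower())
    return hits / len(terms) if terms else 0.0

def hybrid_score(query: str, qvec: list[float], doc: Doc, alpha: float = 0.5) -> float:
    # alpha = 1.0 is pure semantic, alpha = 0.0 is pure keyword;
    # the blend scores both signals in one query.
    return alpha * cosine(qvec, doc.vector) + (1 - alpha) * keyword_score(query, doc.text)
```

A document carrying the exact credential outranks a merely similar one even when their embeddings are close, which is the failure mode pure semantic search has with certifications.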
Outcome pathway intelligence is a graph problem. Which course feeds which employer type, which profile trajectories produce durable placements — Cypher traversals answer these elegantly where SQL joins across five tables fail.
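The pathway question is reachability over a graph. A minimal in-memory sketch (node names invented for illustration) shows the shape of the traversal a Cypher query would run natively in Neo4j:

```python
from collections import defaultdict, deque

# Illustrative edges: course -> skill -> employer type.
edges: defaultdict[str, list[str]] = defaultdict(list)
edges["Cloud Support Fundamentals"].append("skill:cloud-ops")
edges["skill:cloud-ops"].append("employer:managed-services")
edges["Data Analytics Bootcamp"].append("skill:sql")
edges["skill:sql"].append("employer:managed-services")
edges["skill:sql"].append("employer:fintech")

def reachable_employers(course: str) -> set[str]:
    # BFS over the pathway graph — the in-memory analogue of a
    # variable-length Cypher traversal from a Course node to Employer nodes.
    seen: set[str] = set()
    out: set[str] = set()
    queue = deque([course])
    while queue:
        node = queue.popleft()
        for nxt in edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                if nxt.startswith("employer:"):
                    out.add(nxt)
                queue.append(nxt)
    return out
```

In SQL this traversal is a recursive join across course, skill, and employer tables; in Cypher it is one pattern match, which is the point of the graph store.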
Frontier reasoning for complex assessment. Local Ollama models (Llama 3 / Mistral) for high-frequency routine tasks at zero marginal cost. LLM layer is abstracted — switching providers is a config change, not a rebuild.
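"Switching providers is a config change" can be made concrete with a provider registry behind one call signature. The provider names and stub functions here are hypothetical; in production each callable would wrap a hosted API or a local Ollama endpoint.

```python
from dataclasses import dataclass
from typing import Callable

# Registry mapping a provider name to a completion callable.
PROVIDERS: dict[str, Callable[[str], str]] = {}

def register(name: str):
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        PROVIDERS[name] = fn
        return fn
    return wrap

@register("frontier")
def frontier_model(prompt: str) -> str:
    # Stand-in for a hosted frontier API call.
    return f"[frontier] {prompt}"

@register("local")
def local_model(prompt: str) -> str:
    # Stand-in for a local Ollama model (Llama 3 / Mistral) at zero marginal cost.
    return f"[local] {prompt}"

@dataclass
class LLMConfig:
    provider: str = "local"  # the "config change" — no code rebuild

def complete(cfg: LLMConfig, prompt: str) -> str:
    return PROVIDERS[cfg.provider](prompt)
```

Routing high-frequency routine tasks to `"local"` and complex assessment to `"frontier"` is then a per-task config decision.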
Kafka-compatible API, single binary deployment, no ZooKeeper. All the architectural benefits of Kafka without a dedicated ops specialist. RabbitMQ loses event replay. Redis Pub/Sub loses messages on consumer downtime.
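The replay property that rules out RabbitMQ and Redis Pub/Sub can be shown with a toy append-only log and per-consumer offsets, the model Redpanda inherits from Kafka. The class and method names are invented for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class EventLog:
    # Kafka-style append-only log: events persist independently of consumers,
    # so a consumer that was offline replays everything from its last offset.
    events: list[str] = field(default_factory=list)
    offsets: dict[str, int] = field(default_factory=dict)

    def publish(self, event: str) -> None:
        self.events.append(event)

    def poll(self, consumer: str) -> list[str]:
        start = self.offsets.get(consumer, 0)
        batch = self.events[start:]
        self.offsets[consumer] = len(self.events)
        return batch
```

A queue that deletes on delivery cannot do this: events published during consumer downtime are simply gone.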
Tool calling, memory injection, chain composition — solved problems. Building this from scratch has no strategic value. LlamaIndex is better for document RAG specifically; LangChain wins on general multi-agent orchestration.
Competing solutions store data. COMPASS builds knowledge. Every outcome feeds back into the next decision. The system widens its advantage over time — not just at launch. No competitor has closed the feedback loop between placement outcomes and enrolment decisions.
Most platforms surface information and wait for humans to respond. COMPASS deploys agents that monitor, act autonomously on routine tasks, and surface only decisions that require judgment. The cognitive load reduction is structural, not incremental.
Competitors address matching at placement stage — after the damage is done. COMPASS addresses it at enrolment, where poor decisions cascade into every downstream failure. This is the highest-leverage intervention in the entire system.
The ghosting problem is a design failure, not a communication failure. COMPASS delivers visible value before asking for anything. Engagement becomes rational. The system earns participation rather than extracting it through reminder volume.
Entire stack is open source. No per-seat licences. No proprietary platform fees. LLM layer is abstracted — switching providers is a config change. The cost of scaling is compute and data volume, not vendor tiers.
GridTrace and Cassandra.ai prove the architectural patterns work. Institutional memory agents that learn from outcomes and improve with every session. This is not theoretical — it has been built, demonstrated, and validated.
Two identical applications submitted an hour apart should not receive different course recommendations. In a regulated programme, non-determinism is a compliance liability.
Temperature set to zero for assessment tasks. Every decision logged with full prompt, context, model version, and output. Recommendations frozen at decision time. Human sign-off required before any recommendation becomes an action.
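The audit record described above can be sketched as a frozen structure with a stable fingerprint, so identical inputs provably produce identical logged decisions. Field names are illustrative, not the production schema:

```python
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass(frozen=True)
class DecisionRecord:
    # Frozen at decision time: nothing mutates after logging.
    prompt: str
    context: str
    model_version: str
    temperature: float  # 0.0 for assessment tasks
    output: str

    def fingerprint(self) -> str:
        # Stable hash over the full record — two identical applications
        # assessed an hour apart yield the same fingerprint.
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()
```

Comparing fingerprints is then a cheap compliance check: any divergence between supposedly identical decisions is immediately visible in the audit trail.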
If trainees don't engage, the memory layer stays thin, matching quality stays low, and placement outcomes don't improve. The entire system depends on data only trainees can provide.
The portal delivers visible value before asking for anything. First session shows live placement pipeline activity. WhatsApp-first reduces portal dependency. The system gives before it takes.
COMPASS tracks coordinator interventions and salesperson override rates. If staff perceive this as performance monitoring, adoption fails regardless of technical quality.
Staff behaviour data used exclusively to improve system recommendations, never for performance management. This commitment is contractual and visible in the product itself.
Four stores with no native transaction boundary. A partial write — profile updated in PostgreSQL but embedding stale in Weaviate — means the matching agent reasons from outdated data.
Saga pattern via the event bus. Each store update is an idempotent event consumer. Nightly reconciliation job compares checksums across stores and flags drift.
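The nightly reconciliation step can be illustrated with per-record checksums compared across two stores. The record shapes and function names are invented; the real job would compare PostgreSQL rows against their Weaviate and Neo4j projections:

```python
import hashlib
import json

def store_checksums(records: dict[str, dict]) -> dict[str, str]:
    # One checksum per record ID; sorted keys make serialisation stable.
    return {
        rid: hashlib.sha256(json.dumps(rec, sort_keys=True).encode()).hexdigest()
        for rid, rec in records.items()
    }

def find_drift(primary: dict[str, dict], replica: dict[str, dict]) -> set[str]:
    # Flag any record ID whose checksum disagrees or is missing on either side —
    # e.g. a profile updated in PostgreSQL whose embedding is stale in Weaviate.
    a, b = store_checksums(primary), store_checksums(replica)
    return {rid for rid in a.keys() | b.keys() if a.get(rid) != b.get(rid)}
```

Flagged IDs are re-emitted onto the event bus; because each consumer is idempotent, replaying the update is safe even if part of it already landed.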
The three-store memory architecture, the full agent suite, and the MyInfo integration represent the production system. For the prototype, a single-store implementation (PostgreSQL + pgvector) demonstrates the intelligence layer end-to-end. This is honest and defensible — evaluators are assessing the thinking. The prototype shows the thinking works. The architecture shows what it becomes.
COMPASS is ready to build. The architecture is defined. The technology is chosen. The risks are mapped. The prototype scope is clear. What's needed is selection — and the opportunity to build the intelligence infrastructure Singapore's workforce development ecosystem has needed for years.
"Not selecting this is not a conservative choice — it is choosing a system that will automate today's processes and stagnate. COMPASS is the only submission that will still be getting better in year three."