Argos Project Portal

Argos - NAP Solutions
Project Kickoff - March 2026

Argos: AI-Powered Counterfeit Detection for NAP Solutions

A new AI-powered crawling platform replacing NAP's existing Artemis scraper with intelligent pre-filtering, image-based detection, and confidence scoring - reducing manual review workload by 70–80%.

Duration: ~23 weeks
Investment: $50,820
Pilot Platforms: Amazon + Shopify
Client: NAP Solutions

Team

Frances Marie Teves - Technical Project Manager
Client-facing PM, weekly updates, scope management

Gee Quidet - Chief Revenue & Solutions Officer
Accounts management, client relationship

Mary Amora - Solutions Consultant
Technical solutions, architecture guidance

Ian Natividad - Lead Developer
Full-stack development, AI integration, crawlers

Ragan Lamoc - Developer
Backend development, build support

What is Argos?

Argos is a standalone, AI-powered counterfeit detection platform. It crawls e-commerce platforms, uses a multi-model AI engine (visual + text) to score listings for potential infringement, and presents results to your QA team for review. Fully independent infrastructure - no dependencies on legacy Artemis.

Detection Pipeline

Step 1 - Configure Campaign: keywords, images, filters, seller whitelist
Step 2 - Crawl Platform: Amazon / Shopify via Crawlee + anti-bot handling
Step 3 - AI Scoring: multi-model, image (60%) + text (40%), per-campaign weights
Step 4 - QA Review: accept / reject flagged listings
Step 5 - Export: CSV, REST API, or webhooks
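
For illustration, a minimal sketch of how the crawl step (Step 2) might look with the Crawlee + Playwright stack named above. The CampaignConfig shape, page selectors, and search-page seeding are assumptions for this sketch, not the delivered implementation:

```typescript
// Minimal sketch of the crawl step, assuming Crawlee + Playwright.
// Selectors and the campaign shape are illustrative only.
import { PlaywrightCrawler } from 'crawlee';

interface CampaignConfig {
  id: string;
  keywords: string[];          // e.g. ["miffy plush"]
  sellerWhitelist: string[];   // legitimate sellers to skip
}

export async function crawlAmazon(campaign: CampaignConfig) {
  const crawler = new PlaywrightCrawler({
    maxRequestsPerMinute: 30,  // stay under platform rate limits
    requestHandler: async ({ page, request, pushData }) => {
      // Hypothetical selectors - real ones depend on Amazon's current markup
      const title = await page.locator('#productTitle').textContent();
      const seller = await page.locator('#sellerProfileTriggerId').textContent();
      if (seller && campaign.sellerWhitelist.includes(seller.trim())) return;
      await pushData({ campaignId: campaign.id, url: request.url, title, seller });
    },
  });
  // Seed with keyword search pages; detail-page enqueueing omitted for brevity
  await crawler.run(
    campaign.keywords.map(
      (k) => `https://www.amazon.com/s?k=${encodeURIComponent(k)}`,
    ),
  );
}
```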

Multi-Model AI Engine

Local Tier (Free)
  • Fuzzball - 40% weight - fuzzy text matching for trademark ID
  • CLIP Pre-Screen - filter - ViT-B/32 embeddings for fast image triage

Heavy Tier (Vision LLM)
  • Deep Image Compare - 60% weight - GPT-4o Vision / Claude visual similarity
  • AI Reasoning - score - GPT-4o / Claude contextual analysis

Provider Abstraction Layer: config-driven routing across OpenAI, Anthropic Claude, and Google Gemini with automatic fallback.
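
A minimal sketch of what that abstraction layer could look like - the VisionProvider interface and scoreWithFallback helper are illustrative names, not the final API:

```typescript
// Sketch of config-driven provider routing with automatic fallback.
interface VisionProvider {
  name: string;
  scoreImagePair(refUrl: string, listingUrl: string): Promise<number>; // 0-100
}

export async function scoreWithFallback(
  providers: VisionProvider[],  // ordered per config, e.g. [openai, claude, gemini]
  refUrl: string,
  listingUrl: string,
): Promise<{ provider: string; score: number }> {
  let lastError: unknown;
  for (const p of providers) {
    try {
      return { provider: p.name, score: await p.scoreImagePair(refUrl, listingUrl) };
    } catch (err) {
      lastError = err;          // provider down or rate-limited: try the next one
    }
  }
  throw new Error(`All vision providers failed: ${String(lastError)}`);
}
```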

Confidence Scoring

Score      Action
70–100     High Priority - QA review required
40–69      Review - Optional QA review
0–39       Auto-filtered - Excluded
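
As a sketch of how these buckets combine with the engine's default 60/40 image/text weights (fuzzball is the text matcher named in the engine section; weights and thresholds are per-campaign defaults, not fixed values):

```typescript
// Illustrative blend of image + text scores, then bucketing per the table above.
import * as fuzz from 'fuzzball';

export function textScore(listingTitle: string, trademarkTerms: string[]): number {
  // token_set_ratio is order-insensitive, which suits noisy listing titles
  return Math.max(...trademarkTerms.map((t) => fuzz.token_set_ratio(listingTitle, t)));
}

export function blend(imageScore: number, txtScore: number, imageWeight = 0.6): number {
  return imageWeight * imageScore + (1 - imageWeight) * txtScore;
}

export function triage(score: number): 'high-priority' | 'review' | 'auto-filtered' {
  if (score >= 70) return 'high-priority'; // QA review required
  if (score >= 40) return 'review';        // optional QA review
  return 'auto-filtered';                  // excluded
}
```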

Tech Stack

Frontend: Next.js, React, TypeScript, TailwindCSS
Backend: Node.js, Express, Drizzle ORM, JWT/RBAC
Database: PostgreSQL 16 + pgvector
Queue: BullMQ + Redis
Crawler: Crawlee + Playwright
Infra: Google Cloud Platform (Cloud Run)
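
To show how the pgvector piece fits, a hedged sketch of the CLIP pre-screen query - table and column names are assumptions for illustration:

```typescript
// Sketch: triage crawled images by cosine distance to campaign reference
// embeddings (pgvector) before spending Vision-LLM tokens.
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Returns true if the listing image is close enough to any campaign reference
// image to justify the heavy tier (deep image compare).
export async function passesPreScreen(
  campaignId: string,
  listingEmbedding: number[],   // 512-dim ViT-B/32 vector from CLIP
  maxDistance = 0.35,           // assumed triage threshold, tunable per campaign
): Promise<boolean> {
  const { rows } = await pool.query(
    `SELECT embedding <=> $1::vector AS distance
       FROM reference_images
      WHERE campaign_id = $2
      ORDER BY distance
      LIMIT 1`,
    [JSON.stringify(listingEmbedding), campaignId],
  );
  return rows.length > 0 && rows[0].distance <= maxDistance;
}
```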

Scope

In Scope (This Engagement)

  • Amazon.com product listing crawling & detection
  • Shopify / branded store crawling & detection
  • AI-powered image + text scoring engine
  • QA review dashboard & admin panel
  • REST API, CSV export (ARTEMIS-compatible), webhooks
  • Cross-platform matching (Amazon ↔ Shopify)
  • Side-by-side validation against legacy system
  • Full source code handover, docs & training
  • 30-day post-handover support

Out of Scope (Evaluated, Not Included)

  • Walmart, Temu, Alibaba, AliExpress crawling (future phases)
  • Regional Amazon domains (.de, .co.uk, etc.) - only .com
  • Smart Search / historical learning from enforcement decisions
  • Dashboard & Statistics module (advanced analytics)
  • Crawling Priorities & automated scheduling (manual trigger only)
  • Automated enforcement actions (detection & review only)
  • Legacy Artemis modifications or decommissioning
  • Ongoing hosting & AI costs (passed to NAP at cost)
  • Extended support beyond 30-day window (retainer available)

Users & Roles

👤
Admin
NAP management & campaign leads
  • Create & configure campaigns
  • Set keywords, images, filters, whitelists
  • Adjust detection thresholds per category
  • Manage users and assign roles
  • View analytics & performance dashboards
  • Set crawling frequency & priorities
🔍
QA Reviewer
NAP's review team (~25 people)
  • Review AI-flagged listings in dashboard
  • Accept or reject with reasoning
  • Filter by date, campaign, country, score
  • Export approved results to CSV
  • Decisions train the AI over time
🔗
API User (Post-MVP)
External systems & integrations
  • Token-based API authentication
  • Query detection results programmatically
  • Utilize all available filters
  • Integrate with existing NAP tools
  • Webhook notifications for new results
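
For a flavor of the post-MVP API, a hypothetical client call - the endpoint path, query parameters, and response shape are placeholders until the Swagger contract is published:

```typescript
// Hypothetical API User client: pull high-priority results with filters.
const BASE = process.env.ARGOS_API_URL ?? 'https://argos.example.com/api';

export async function fetchFlaggedListings(token: string) {
  const params = new URLSearchParams({
    campaign: 'miffy',
    minScore: '70',        // high-priority bucket only
    platform: 'amazon',
    from: '2026-03-01',
  });
  const res = await fetch(`${BASE}/v1/results?${params}`, {
    headers: { Authorization: `Bearer ${token}` },  // token-based auth
  });
  if (!res.ok) throw new Error(`Argos API error: ${res.status}`);
  return res.json();
}
```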

User Stories

Admin: As an admin, I want to create and configure campaigns with keywords, reference images, and seller filters so the crawler targets the right listings.
Admin: As an admin, I want to set a seller whitelist so legitimate sellers are excluded from results.
Admin: As an admin, I want to adjust AI confidence thresholds per category to balance precision vs. recall.
QA: As a QA reviewer, I want to see AI-scored results in a sortable, filterable table for efficient review.
QA: As a QA reviewer, I want to accept or reject listings with a reason so the system learns over time.
QA: As a QA reviewer, I want to export approved results to CSV in ARTEMIS-compatible format.
Admin: As an admin, I want to filter results by seller location so the legal team can focus enforcement.
Admin: As an admin, I want to view crawl job status and detection analytics to monitor performance.
Admin: As an admin, I want automated alerts when AI spend exceeds budget thresholds.
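
The budget-alert story lends itself to a small sketch: a periodic check of month-to-date AI spend against a configured cap. Names and the notification channel are assumptions, not the delivered alert system:

```typescript
// Sketch: compare month-to-date AI spend to a configured cap and notify.
interface BudgetConfig { monthlyCapUsd: number; warnAtPct: number } // e.g. 0.8

export function checkAiSpend(
  spentUsd: number,
  cfg: BudgetConfig,
  notify: (msg: string) => void,
): void {
  const pct = spentUsd / cfg.monthlyCapUsd;
  if (pct >= 1) {
    notify(`AI spend $${spentUsd.toFixed(2)} exceeded the $${cfg.monthlyCapUsd} cap - pausing heavy-tier scoring`);
  } else if (pct >= cfg.warnAtPct) {
    notify(`AI spend at ${(pct * 100).toFixed(0)}% of monthly cap`);
  }
}
```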

Scope & Phases

Phase 1
Amazon Crawler & AI Engine
Weeks 1-12

Deliverables

  • Automated Amazon.com product listing crawler
  • Rate-limit management & anti-bot handling
  • AI-powered image comparison (multi-model: GPT-4o Vision, Claude, Gemini)
  • NLP text analysis (Fuzzball fuzzy matching + CLIP pre-screening)
  • Confidence scoring with AI reasoning
  • Campaign configuration (keywords, images, filters, whitelist)
  • Seller location filtering
  • QA review dashboard with accept/reject workflow
  • REST API for integration + CSV export
  • Alert & notification system

Weekly Breakdown

Wk 1-2 Project setup, architecture, DB schema, GCP infra
Wk 3-4 Core backend, auth (JWT/RBAC), API endpoints, BullMQ
Wk 5-7 Frontend app, campaign UI, QA review, admin dashboard
Wk 6-8 Amazon crawler (Crawlee + Playwright)
Wk 7-9 AI scoring engine (vision + text matching)
Wk 8-9 Data export & API layer, webhooks
Wk 9-11 Testing, QA, performance, deployment
Phase 2
Shopify & Cross-Platform
Weeks 13-17

Deliverables

  • Shopify store discovery and crawling engine
  • Cross-platform matching (Amazon ↔ Shopify)
  • Shopify-specific heuristics (store age, reviews, pricing)
  • Unified dashboard across both platforms
  • API User management (tokens, access control)
  • Seller location: best-effort (Shopify limitation)
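
A rough sketch of the core cross-platform matching idea - normalize seller signals and fuzzy-compare them. The production matcher would layer in the store-age, review, and pricing heuristics listed above; this shows the linking step only:

```typescript
// Sketch: link the same seller across Amazon and Shopify via fuzzy name match.
import * as fuzz from 'fuzzball';

interface SellerSignal { platform: 'amazon' | 'shopify'; name: string; domain?: string }

const normalize = (s: string) =>
  s.toLowerCase().replace(/\b(store|shop|official|llc|ltd)\b/g, '').replace(/[^a-z0-9]/g, '');

export function likelySameSeller(a: SellerSignal, b: SellerSignal, threshold = 85): boolean {
  if (a.platform === b.platform) return false; // only cross-platform links
  return fuzz.ratio(normalize(a.name), normalize(b.name)) >= threshold;
}
```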

Weekly Breakdown

Wk 13-15 Shopify crawler, store discovery, extraction
Wk 15-16 Cross-platform matching & heuristics
Wk 16-17 Dashboard integration, testing, validation
Gate: Phase 2 begins only after Phase 1 passes acceptance testing.
Phase 3
Validation & Handover
Weeks 17-21

Deliverables

  • Side-by-side testing (min 1,000 listings/day for 10 days)
  • Shadow mode → Supervised mode validation
  • Complete source code handover (Git transfer)
  • Architecture, API (Swagger), deployment docs
  • Operations runbook & data dictionary
  • Training: Ops (2x2hr), Technical (2x2hr), Platform (1x3hr)
  • 30-day post-handover support

Validation Process

Week 1 Shadow Mode: Runs alongside legacy, no production actions
Week 2 Supervised: 100% human review, legacy as fallback
Sign-off Match/exceed legacy on all KPIs for 5 business days
Rollback: Legacy stays in standby 30 days. Full rollback possible within 4 hours.

Project Timeline

23 weeks from kickoff to handover, including a 2-week buffer.

Phase 1 (Wk 1–12): Project Setup · Core Backend · Frontend App · Amazon Crawler · AI Scoring Engine · Data Export & API · Testing & Deploy
Phase 2 (Wk 13–17): Shopify Crawler · Cross-Platform · Dashboard V2
Phase 3 (Wk 17–21): Side-by-Side · Validation · Docs & Handover · Training
Buffer (Wk 22–23)

Success Metrics & KPIs

≤ 5% False Positive Rate - adjustable per category
≤ 5% False Negative Rate - against known dataset
≤ 15 min Crawl-to-Review - up to 5,000 listings
≥ 95% Precision - actual counterfeits
≥ 99.5% Uptime SLA - production system

Phase 1 Success Criteria (Amazon)

  • All detection KPIs met for 5+ consecutive business days
  • Dashboard & API stable (no P1/P2 bugs for 7+ days)
  • System processes target daily listing volume
  • 2+ NAP team members can independently operate dashboard
  • Miffy campaign produces actionable results

Phase 2 Success Criteria (Shopify)

  • Phase 1 sign-off completed (prerequisite)
  • Shopify crawler integrated and processing listings
  • Cross-platform matching (Amazon ↔ Shopify) operational
  • Unified dashboard across both platforms
  • Phase 2 acceptance testing passed

Phase 3 Success Criteria (Migration)

  • Argos matches/exceeds legacy on all KPIs (blind evaluation)
  • Automated comparison report validates 10 business days
  • All documentation delivered & accepted
  • NAP team completes all 3 training tracks
  • NAP team operates independently for 2 weeks (shadowed)

Adjustable Thresholds

All AI confidence thresholds - including image similarity, text matching sensitivity, and overall scoring - are configurable per category. No code changes required; admins can tune thresholds directly from the dashboard.
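
A sketch of how per-category overrides might be modeled - field names are illustrative; the real schema ships with the data dictionary:

```typescript
// Sketch: per-category threshold overrides edited from the dashboard,
// falling back to system defaults. No code changes needed to tune.
interface CategoryThresholds {
  category: string;          // e.g. "plush-toys"
  imageWeight: number;       // default 0.6
  textSensitivity: number;   // min fuzzy-match score to count a text hit
  highPriorityAt: number;    // default 70
  reviewAt: number;          // default 40
}

const DEFAULTS: Omit<CategoryThresholds, 'category'> = {
  imageWeight: 0.6,
  textSensitivity: 80,
  highPriorityAt: 70,
  reviewAt: 40,
};

// Defaults apply wherever a category has no overrides yet
export function thresholdsFor(
  category: string,
  overrides: Map<string, Partial<CategoryThresholds>>,
): CategoryThresholds {
  return { category, ...DEFAULTS, ...overrides.get(category) };
}
```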

Risks & Mitigation

Platform API Changes (High impact, Med. probability)
Weekly monitoring of Amazon SP-API and Shopify changelogs. Weeks 22–23 buffer reserved.

Anti-Bot / CAPTCHA Evolution (High impact, Med. probability)
ScraperAPI + rotating proxies + CAPTCHA-solving. Proven approach from POC phase.

AI Detection Accuracy (High impact, Low probability)
Iterative training on NAP's labeled datasets. Adjustable thresholds. Human-in-the-loop feedback.

Third-Party Service Availability (Med. impact, Med. probability)
Provider-agnostic AI abstraction layer. Can swap between OpenAI, Claude, Gemini, or open-source.

Data Volume Scaling (Med. impact, Low probability)
Horizontal scaling on GCP (Cloud Run auto-scaling). Cost-per-listing decreases with volume.

Scope Creep (High impact, Med. probability)
Strict phase gates with clear entry/exit criteria. Change requests managed via sprint planning.

What We Need from NAP Solutions

Access & Credentials

  • Access to existing Artemis scraper source code
  • Sample datasets (known counterfeit + known legitimate listings)
  • Campaign guidelines / matrix (Miffy as pilot)
  • API keys or account access for testing platforms

Infrastructure

  • GCP billing account for ongoing AI & hosting costs
  • Decision on proxy service tier (ScraperAPI)
  • Preferred communication channel (Slack, Teams, or email)

Ongoing Collaboration

  • Designate primary technical contact (Ed / IT team)
  • Weekly sync availability (30-60 min)
  • Timely feedback on sprint demos (within 2-3 days)
  • QA team available for validation phase

Sign-offs Required

  • Phase 1 acceptance (before Phase 2 begins)
  • Validation pass/fail criteria confirmation
  • Migration sign-off (before legacy standby period)
  • Final handover acceptance

Next Steps

1. Kickoff Alignment: Confirm scope, timeline, communication cadence, and team introductions. (This meeting!)
2. Access & Credentials Exchange: NAP provides Artemis source code, sample datasets, Miffy campaign matrix, and GCP billing setup.
3. Environment Setup (Wk 1): Dev environment provisioned, CI/CD pipeline configured, Miffy test campaign data loaded.
4. First Sprint Demo (Wk 2): Working prototype of Amazon crawler with initial detection results. Real data flowing through the system.

How We Work

Sprint Cadence

2-week sprints with a demo at the end of each. Daily async status updates keep both sides aligned between demos.

Communication

Weekly updates via email or preferred channel. Ad-hoc communication available anytime - we're reachable beyond email.

Progress Reports

Regular accomplishment reports on completed tasks, upcoming work, and blockers. Transparent sprint velocity tracking.

Escalation Protocol

Any risk impacting timeline by more than 1 week is escalated within 24 hours with a proposed mitigation plan. No surprises.


Sprint Targets & Success Criteria

2-week sprints. Each sprint ends with a live demo to NAP stakeholders. Progress is measured by what you can see and test, not by tasks completed internally.

11 sprints · 23 weeks total
3 high-risk sprints (S1, S4, S6)
2 gate sprints requiring sign-off (S6, S9)
6 client dependencies flagged
PHASE 1
Amazon Crawler & AI Engine Weeks 1–12
Sprint 1 Weeks 1–2
HIGH RISK
Goal: Prove we can stand up the full environment and ingest real data
📺 Demo Deliverable
  • Live GCP environment with CI/CD pipeline running
  • Database schema walkthrough (ERD on screen)
  • User login flow working (admin + QA roles)
  • Miffy campaign test data loaded into system
✅ Definition of Done
  • Admin can log in and see empty dashboard
  • CI/CD deploys to staging on git push
  • Sample Miffy data visible in DB
🚨 Client Dependencies (CRITICAL)
  • By Day 3: Artemis source code access - we need to understand the legacy schema to design ours
  • By Day 5: Sample datasets (known counterfeit + known legitimate) - required for Sprint 4 AI training
  • By Day 5: Miffy campaign matrix/guidelines - this is our pilot campaign
  • By Week 1: GCP billing account linked - blocks all infrastructure provisioning
💡 PM Recommendation
Schedule a dedicated technical handoff session with Ed/IT team in Week 1. If credentials and source code aren't provided by Day 5, every downstream sprint shifts. This is the #1 schedule risk for the entire project.
Sprint 2 Weeks 3–4
Goal: Core backend is functional - users, roles, campaigns all working end-to-end
📺 Demo Deliverable
  • Admin creates a Miffy campaign with keywords + reference images
  • Role-based access: Admin sees everything, QA sees only review queue
  • Background job queue processing visible in admin panel
✅ Definition of Done
  • Admin can CRUD campaigns through the API
  • JWT auth + RBAC enforced on all endpoints
  • Job queue accepts crawl requests (even if crawler isn't built yet)
💡 PM Recommendation
Confirm the user roles & permissions matrix during this sprint. If NAP wants additional roles beyond Admin/QA/API, now is the time - adding roles later means reworking the auth layer.
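
For context on the Sprint 2 queue deliverable, a minimal BullMQ sketch: the API can enqueue crawl requests before the crawler exists, with a worker stub standing in until Sprint 4. Queue and job names are illustrative:

```typescript
// Sketch of the Sprint 2 job-queue deliverable using BullMQ + Redis.
import { Queue, Worker } from 'bullmq';

const connection = { host: process.env.REDIS_HOST ?? 'localhost', port: 6379 };
export const crawlQueue = new Queue('crawl', { connection });

// Called by the campaigns API when an admin triggers a crawl
export async function enqueueCrawl(campaignId: string) {
  await crawlQueue.add('crawl-campaign', { campaignId }, {
    attempts: 3,                                   // retry transient failures
    backoff: { type: 'exponential', delay: 60_000 },
  });
}

// Worker stub - Sprint 4 replaces the body with the real Crawlee run
new Worker('crawl', async (job) => {
  console.log(`would crawl campaign ${job.data.campaignId}`);
}, { connection });
```
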
Sprint 3 Weeks 5–6
Goal: The UI is real - NAP team can click through and give feedback
📺 Demo Deliverable
  • Campaign configuration UI (keywords, images, filters, whitelist)
  • QA review dashboard with mock AI-scored results
  • Admin dashboard with placeholder analytics
  • Seller location filtering UI
✅ Definition of Done
  • NAP team can log in and navigate all screens
  • Campaign creation works end-to-end (UI → API → DB)
  • QA reviewer can accept/reject mock listings
🔍 Client Action Needed
UI review within 3 business days of demo. Designate 1–2 NAP reviewers for fast feedback. UI changes requested after Sprint 4 may delay the crawler integration.
Sprint 4 Weeks 7–8
HIGH RISK
Goal: First real AI-scored results on live Amazon data - the moment of truth
📺 Demo Deliverable
  • Live crawl of Miffy campaign on Amazon.com
  • AI-scored results with confidence scores + reasoning
  • Side-by-side: Argos results vs. what Artemis would have found
  • First accuracy metrics (false positive/negative rates)
✅ Definition of Done
  • Crawler successfully processes 100+ Amazon listings
  • AI returns confidence scores for each listing
  • QA reviewer can review real results in the dashboard
  • No crawler blocks/bans from Amazon for 48+ hours
⚠️ Why This Sprint Is High Risk
  • Anti-bot detection - Amazon may block crawlers. ScraperAPI + proxy rotation mitigates, but first contact is unpredictable
  • AI accuracy - First real-world test of the scoring engine. May need threshold tuning
  • Depends on Sprint 1 data - If sample datasets were delayed, AI training quality suffers here
💡 PM Recommendation
Have NAP domain experts on standby during this sprint. When AI flags its first batch of listings, we need quick validation: “Is this actually suspicious?” Early feedback directly improves detection quality. We suggest a 30-min daily check-in during Weeks 7–8.
Sprint 5 Weeks 9–10
Goal: Full pipeline working end-to-end - crawl, score, review, export
📺 Demo Deliverable
  • Complete flow: campaign → crawl → AI score → QA review → CSV export
  • CSV format validated against ARTEMIS compatibility
  • Alert system: notifications when crawl completes, budget thresholds
  • REST API endpoints documented (Swagger)
✅ Definition of Done
  • QA team can do a full mock review session (10+ listings)
  • CSV export opens correctly in NAP's existing tools
  • API returns data matching dashboard view
🔍 Client Action Needed
Validate the CSV export format. If the ARTEMIS-compatible format needs adjustments, this is the last sprint to make changes without impacting Phase 1 acceptance.
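
For reference, a bare-bones CSV serialization sketch. The headers below are placeholders only; the real column set is whatever the ARTEMIS-compatible validation in this sprint confirms:

```typescript
// Sketch: serialize approved results to CSV. Column names are hypothetical.
interface ApprovedResult {
  listingUrl: string; platform: string; score: number;
  sellerName: string; reviewedBy: string; reviewedAt: string;
}

const csvEscape = (v: string | number) => `"${String(v).replace(/"/g, '""')}"`;

export function toCsv(rows: ApprovedResult[]): string {
  const headers = ['listing_url', 'platform', 'score', 'seller_name', 'reviewed_by', 'reviewed_at'];
  const lines = rows.map((r) =>
    [r.listingUrl, r.platform, r.score, r.sellerName, r.reviewedBy, r.reviewedAt]
      .map(csvEscape).join(','),
  );
  return [headers.join(','), ...lines].join('\n');
}
```
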
Sprint 6 Weeks 11–12
HIGH RISK GATE
Goal: Phase 1 acceptance - all KPIs met, system stable, NAP signs off
📺 Demo Deliverable
  • Full system demo with live Amazon data
  • KPI dashboard: false positive/negative rates, crawl-to-review time, precision
  • System running 5+ consecutive days with no P1/P2 bugs
  • 2+ NAP team members independently operating the platform
✅ Acceptance Criteria
  • False positive/negative rates ≤ 5%
  • Crawl-to-review ≤ 15 min (up to 5K listings)
  • Precision ≥ 95%
  • Miffy campaign produces actionable results
  • NAP formal sign-off
🚫 Phase Gate
Phase 2 cannot begin until Phase 1 is signed off. If KPIs are not met, we use Weeks 22–23 buffer to remediate. Milestone 2 payment (35% / $17,787) is triggered on sign-off.
💡 PM Recommendation
Start acceptance testing in Week 11, not Week 12. This gives us a full week to fix any issues before the gate. Schedule the sign-off meeting for end of Week 12 with NAP decision-makers (not just the technical team).
PHASE 2
Shopify & Cross-Platform Weeks 13–17
Sprint 7 Weeks 13–14
Goal: Shopify crawling works - first Shopify-sourced results visible
📺 Demo Deliverable
  • Shopify store discovery engine running
  • First Shopify-sourced listings with AI scores
  • Shopify-specific heuristics (store age, reviews, pricing anomalies)
✅ Definition of Done
  • Crawler discovers and processes Shopify stores
  • Results appear in QA dashboard alongside Amazon results
  • Platform source clearly labeled per listing
🔍 Client Action Needed
Provide a list of known Shopify counterfeiters (if available) for validation. Also flag any Shopify-specific detection patterns your team has observed - this trains the heuristics engine.
Sprint 8 Weeks 15–16
Goal: Cross-platform intelligence - same seller detected across Amazon + Shopify
📺 Demo Deliverable
  • Cross-platform matching: Amazon ↔ Shopify linked sellers
  • Unified dashboard with platform filter
  • API User management (token generation, access control)
✅ Definition of Done
  • Multi-platform sellers flagged automatically
  • Dashboard filters by platform, campaign, date, score
  • Phase 2 acceptance criteria met
💡 PM Recommendation
Note on seller location for Shopify: Unlike Amazon, Shopify doesn't expose seller location reliably. We'll implement best-effort detection (WHOIS, shipping origins, payment info) but accuracy will be lower than Amazon. Set expectations with the legal team now.
Sprint 9 Weeks 17–18
GATE
Goal: Phase 2 sign-off + side-by-side validation begins
📺 Demo Deliverable
  • Phase 2 acceptance demo (Shopify + cross-platform)
  • Side-by-side comparison dashboard: Argos vs. legacy Artemis
  • Shadow mode activated - Argos running alongside legacy
✅ Definition of Done
  • Phase 2 signed off by NAP
  • Shadow mode processing 1,000+ listings/day
  • Automated comparison report generating daily
🚨 Client Dependencies
QA team (~25 people) must be available for parallel reviews starting this sprint. Legacy Artemis must remain running. Milestone 3 payment (25% / $12,705) triggered on Phase 2 sign-off.
PHASE 3
Validation & Handover Weeks 17–21 + Buffer (22–23)
Sprint 10 Weeks 19–20
Goal: Argos proven to match or exceed legacy - docs and training complete
📺 Demo Deliverable
  • 10-day blind evaluation results: Argos vs. legacy on all KPIs
  • Supervised mode: 100% human review with legacy as fallback
  • Architecture docs, API (Swagger), deployment runbook
  • Training sessions: Ops (2×2hr), Technical (2×2hr), Platform (1×3hr)
✅ Definition of Done
  • Argos matches/exceeds legacy on all KPIs for 5+ business days
  • All documentation delivered and accepted
  • NAP team completes all 3 training tracks
  • NAP team operates independently (shadowed by Symph)
🔍 Client Action Needed
Schedule training sessions now. 11 hours total across 3 tracks - needs calendar coordination with ops team, IT team, and platform users. NAP must designate who attends each track.
Sprint 11 Weeks 21–23
FINAL
Goal: Clean handover - NAP owns the system, Symph transitions to support
📺 Demo Deliverable
  • Complete source code transferred (Git repo handover)
  • Operations runbook & data dictionary
  • NAP team running system independently for 2+ weeks
  • 30-day post-handover support period begins
✅ Final Acceptance Criteria
  • All code, docs, credentials transferred
  • NAP confirms independent operation capability
  • Legacy standby period begins (30 days, rollback within 4hr)
  • Final sign-off from NAP
🎉 Milestone 4 Payment
Final 20% ($10,164) triggered on sign-off. Weeks 22–23 serve as buffer for any remaining items. Post-handover support (30 days) is included - extended support available via retainer.
💡 PM Recommendation
Discuss post-handover retainer options during this sprint. 30 days of support goes fast - if NAP wants ongoing AI model tuning, crawler maintenance, or feature work, scope it before the support window ends.
📣

Weekly Updates

Weekly status reports will appear here as the project progresses.

Investment & Payment Schedule

Milestone-based payments. Each requires written acceptance.

Milestone 1 - Project Kickoff: 20% ($10,164)
Milestone 2 - Phase 1 Delivered: 35% ($17,787)
Milestone 3 - Phase 2 Delivered: 25% ($12,705)
Milestone 4 - Phase 3 Complete + Sign-off: 20% ($10,164)

Total Project Investment: $50,820

Ongoing Costs (Post-Launch)

AI Inference Costs (monthly)

Volume               Estimated Cost
10,000 listings      $120–$180
50,000 listings      $450–$650
100,000 listings     $800–$1,100
500,000+ listings    $3,200–$4,500
Per-listing cost decreases with volume. Budget caps & alerts included.
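
To make the per-listing economics concrete, using the table's midpoints as a rough illustration: at 10,000 listings, $150/mo ÷ 10,000 ≈ $0.015 per listing; at 500,000 listings, $3,850/mo ÷ 500,000 ≈ $0.0077 per listing - roughly half the unit cost at 50× the volume.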

Infrastructure (monthly, at cost)

Service                        Estimated Cost
Compute (Cloud Run/GKE)        $30–$80
Database (PostgreSQL)          $10–$40
Storage (GCS)                  $5–$20
Hosting subtotal               $45–$140/mo
ScraperAPI (proxy, separate)   ~$49/mo
📁

Documents

Project documents, links, and shared resources will be organized here.