FAQ

What is Seen Live?

Seen Live detects where your tracks are being played in the real world. DJ sets, clubs, festivals, filmed performances — we scan public videos on YouTube, TikTok, and Instagram using audio fingerprinting and tell you exactly who played your music, where, and when. No manual searching. No guessing. Automated proof.

How does it work?

You upload your track. We generate a unique audio fingerprint — a digital signature of your sound — and start scanning public video content across platforms. When a match is found — your track in a DJ set, a club video, a festival recording — you get the detection: DJ name, venue, city, date, platform. The technology works even in noisy real-world conditions — crowd noise, pitch shifts, tempo variations.
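The steps above can be sketched end to end. Everything here is a hypothetical stand-in (the fingerprint is reduced to a plain digest, and matching to a set lookup), shown only to make the flow concrete:

```python
# Hypothetical end-to-end flow: upload -> fingerprint -> scan -> detection.
# The fingerprint is a stand-in digest; real ACR hashes spectral features.
import hashlib

def fingerprint(audio_bytes):
    return hashlib.sha256(audio_bytes).hexdigest()

def scan_public_videos(track_fp, videos):
    """Yield a detection record for every public video whose indexed
    audio fingerprints contain the track's fingerprint."""
    for v in videos:
        if track_fp in v["audio_fps"]:
            yield {"dj": v["dj"], "venue": v["venue"], "city": v["city"],
                   "date": v["date"], "platform": v["platform"]}

track_fp = fingerprint(b"my-unreleased-track.wav")
videos = [  # stand-ins for indexed public content
    {"platform": "YouTube", "dj": "Example DJ", "venue": "Warehouse X",
     "city": "Berlin", "date": "2026-05-02", "audio_fps": {track_fp}},
    {"platform": "TikTok", "dj": "Other DJ", "venue": "Club Y",
     "city": "Paris", "date": "2026-05-03", "audio_fps": set()},
]
detections = list(scan_public_videos(track_fp, videos))
print(detections[0]["dj"], detections[0]["city"])  # -> Example DJ Berlin
```

In the real product the matching is acoustic, not exact; the point of the sketch is only the pipeline shape.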

Which platforms do you scan?

YouTube, TikTok, and Instagram. These are where the vast majority of filmed DJ sets, club videos, and festival content are posted publicly. We scan public content only — nothing private or behind login walls.

Will detection still work if a DJ changes the tempo or the recording is noisy?

Audio fingerprinting is built for real-world conditions. Tempo changes, pitch shifts, EQ adjustments, crowd noise layered on top — the technology handles these. A DJ playing your track at +4 BPM with the bass boosted at 3am in a warehouse will still trigger a match. That said, extremely short snippets or heavily distorted edits may reduce detection confidence.

What if nothing is detected?

It could mean your track hasn't been played in a filmed context that our scan covers. We'll be transparent about it. We don't fake detections and we don't inflate numbers. If there's nothing to show, we tell you — and we share ways to increase your chances of live exposure. Honesty is part of the deal.

Do you take any rights to my music?

No. Your files are uploaded strictly for technical analysis — generating an audio fingerprint and running scans. You retain full ownership. We don't redistribute, publish, or exploit your music in any way. Uploading grants us only the limited, non-exclusive license strictly necessary for the service to function. A notice-and-takedown mechanism is in place if needed.

How do you handle my personal data?

The platform complies with GDPR. We collect only what's needed to operate the service. Your data isn't sold to third parties. Legal bases: performance of a contract, legitimate interest, and consent where required. Full details are in our Privacy Policy.

Do you scan private videos or share my uploads?

No. We only scan publicly available content. If a video is private, unlisted, or behind a login wall, it's outside our scope. And we don't share your uploaded track with anyone — the audio file stays within our detection pipeline.

If a DJ plays my track months from now, will you catch it?

Yes. Seen Live scans continuously. If that DJ plays your track six months from now at a festival and someone films it, we'll detect it then. Your fingerprint stays active. Detection isn't a one-time snapshot — it's an ongoing process.

Can I monitor more than one track?

Yes. The number of tracks you can monitor simultaneously depends on your plan. During beta, we'll share specifics on what's available. The core experience starts with uploading at least one track and seeing your first detection.

Do I need to know who's playing my track?

No. That's exactly what Seen Live solves. You don't need to know who, where, or when. You upload your track, and we find the answers. The whole point is that you stop searching manually and start receiving proof automatically.

When can I start using Seen Live?

Beta is open now — we're onboarding artists ahead of the mid-2026 launch. Join the waiting list to get early access. The first product to go live will be ID (detection). Press and Floor follow once enough detection data has accumulated to make the ranking and the EPK meaningful.

What happens after I join the waiting list?

You're in. We'll reach out when it's your turn to access the beta. Early access artists will be the first to upload tracks, test detection, and start building their live history before anyone else. The data you accumulate during beta stays — it's yours from day one.

BEHIND THE SCENES

HOW WE DETECT

Your track is converted into an acoustic fingerprint: a unique spectral signature derived from frequency, amplitude, and temporal patterns. The fingerprint is robust to transformation: it survives pitch shifts, tempo changes, compression artifacts, and environmental noise.
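A common way to build such a fingerprint is the constellation approach used by Shazam-family systems: keep only the strongest spectrogram peaks and hash pairs of nearby peaks, since peaks survive noise and EQ far better than the raw waveform. A toy sketch of that idea (illustrative only, not Seen Live's actual pipeline):

```python
# Toy constellation-style fingerprinting: spectrogram peaks -> pair hashes.
# Parameters and names are illustrative, not a real production pipeline.
import hashlib

import numpy as np

def spectrogram(samples, window=256, hop=128):
    """Magnitude spectrogram via a short-time FFT with a Hann window."""
    frames = [samples[i:i + window] * np.hanning(window)
              for i in range(0, len(samples) - window, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

def peak_constellation(spec, per_frame=3):
    """Keep only the strongest frequency bins of each frame; these
    survive EQ changes and crowd noise far better than raw audio."""
    return [(t, int(f))
            for t, frame in enumerate(spec)
            for f in np.argsort(frame)[-per_frame:]]

def fingerprint(peaks, fan_out=5):
    """Hash (freq1, freq2, time-delta) for pairs of nearby peaks; the
    time delta makes hashes invariant to where a clip starts."""
    out = set()
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1:i + 1 + fan_out]:
            key = f"{f1}|{f2}|{t2 - t1}".encode()
            out.add(hashlib.sha1(key).hexdigest()[:10])
    return out

# One second of a 440 Hz tone, clean and with heavy added noise:
rng = np.random.default_rng(0)
t = np.arange(44100) / 44100
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.3 * rng.standard_normal(len(t))

a = fingerprint(peak_constellation(spectrogram(clean)))
b = fingerprint(peak_constellation(spectrogram(noisy)))
print(f"{len(a & b)} hashes shared despite the noise")
```

In a production system the peak picking is adaptive and the hashes feed an inverted index; the point here is only that peak-pair hashes keep overlapping after the audio is degraded.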

The fingerprint is matched against public video content indexed from YouTube, TikTok, and Instagram. Matching operates on audio segments, not metadata. A 30-second fragment inside a 90-minute DJ set is enough to trigger a detection.
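Segment-level matching is often done with offset voting: every fingerprint hash shared between the query clip and an indexed track votes for one (track, time-offset) pair, so even a short fragment inside a long set produces a sharp spike at a single offset. A minimal sketch with integer stand-in hashes (all names hypothetical):

```python
# Offset-voting match: shared hashes that agree on one time offset win.
from collections import Counter, defaultdict

def build_index(tracks):
    """tracks: {track_id: [(hash, time), ...]} -> hash -> [(track_id, time)]"""
    index = defaultdict(list)
    for tid, fp in tracks.items():
        for h, t in fp:
            index[h].append((tid, t))
    return index

def match(index, query_fp, min_votes=3):
    """Vote on (track_id, offset). A 30-second fragment inside a
    90-minute set still wins: its hashes all agree on one offset."""
    votes = Counter()
    for h, qt in query_fp:
        for tid, tt in index.get(h, []):
            votes[(tid, tt - qt)] += 1
    if not votes:
        return None
    (tid, offset), n = votes.most_common(1)[0]
    return (tid, offset) if n >= min_votes else None

# Toy fingerprints: hashes are plain ints here, times in seconds.
tracks = {
    "my_track": [(h, t) for t, h in enumerate([11, 12, 13, 14, 15, 16, 17, 18])],
    "other":    [(h, t) for t, h in enumerate([21, 22, 23, 24, 25, 26, 27, 28])],
}
index = build_index(tracks)
# A short clip covering seconds 3..6 of my_track, as filmed in a set:
clip = [(14, 0), (15, 1), (16, 2), (17, 3)]
print(match(index, clip))  # -> ('my_track', 3)
```

The `min_votes` threshold is what makes the match confidence-based rather than all-or-nothing.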

Each confirmed match returns structured data: source URL, video uploader, upload date, view count, match duration, percentage of your track used, percentage of the video containing your track, and whether the audio was modified. When available, DJ identification, venue, city, and event date are extracted from contextual metadata.
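The structured data above could be modeled as a record like the following; field names are illustrative, not an actual Seen Live schema:

```python
# Hypothetical detection record mirroring the fields listed above.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class Detection:
    source_url: str
    uploader: str
    upload_date: str           # ISO 8601
    view_count: int
    match_duration_s: float    # seconds of your track detected
    track_coverage_pct: float  # % of your track used
    video_coverage_pct: float  # % of the video containing your track
    audio_modified: bool       # pitch/tempo/EQ changes detected
    dj_name: Optional[str] = None  # contextual fields, when available
    venue: Optional[str] = None
    city: Optional[str] = None
    event_date: Optional[str] = None

d = Detection(
    source_url="https://youtube.com/watch?v=example",
    uploader="club_channel",
    upload_date="2026-03-14",
    view_count=12800,
    match_duration_s=31.5,
    track_coverage_pct=62.0,
    video_coverage_pct=0.6,
    audio_modified=True,
    dj_name="Example DJ",
    city="Berlin",
)
print(asdict(d)["city"])  # -> Berlin
```

Keeping the contextual fields optional matches the text: the acoustic match is always there, the who/where/when only when metadata allows.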

This is ACR — Automatic Content Recognition. Same foundational layer as Shazam. Different use case: reverse detection across user-generated content in real acoustic conditions.

SIGNAL SPECS

RECOGNITION

AUDIO FINGERPRINTING (ACR)

MATCH RESOLUTION

SEGMENT-LEVEL - PARTIAL TRACK DETECTION

AUDIO TOLERANCE

PITCH SHIFT - TEMPO CHANGE - COMPRESSION - AMBIENT NOISE

CONFIDENCE

THRESHOLD-BASED - FALSE POSITIVES MAY OCCUR

DETECTION OUTPUT

SOURCE URL - UPLOADER - VIEW COUNT - MATCH DURATION

SCAN MODE

CONTINUOUS - AUTOMATED

DATA ACCESS

PUBLIC CONTENT ONLY - NO LOGIN WALLS

FEEDS INTO

ID - FLOOR - PRESS

NOW YOU KNOW

The signal is already there.

We just built the infrastructure to read it.


SEEN LIVE

© 2026 CIRCL. All rights reserved.