FAQ
BEHIND THE SCENES
HOW WE DETECT
Your track is converted into an acoustic fingerprint: a compact spectral signature derived from frequency, amplitude, and temporal patterns. This fingerprint is transformation-resistant: it survives pitch shifts, tempo changes, compression artifacts, and environmental noise.
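A minimal sketch of the idea, not our production pipeline: one common fingerprinting approach (popularized by Shazam-style systems) picks spectral peaks per frame and hashes peak pairs, so the signature depends on relative structure rather than absolute loudness. Frame size, hop, and the pairing fan-out below are illustrative choices.

```python
import numpy as np

def fingerprint(samples, frame=2048, hop=512):
    """Toy spectral-peak fingerprint: hash pairs of per-frame peaks.

    Illustrative only -- real ACR systems use far more robust
    peak picking and hashing than this sketch.
    """
    window = np.hanning(frame)
    peaks = []
    for t, start in enumerate(range(0, len(samples) - frame, hop)):
        spectrum = np.abs(np.fft.rfft(samples[start:start + frame] * window))
        # Strongest frequency bin in this frame acts as an anchor peak.
        peaks.append((t, int(np.argmax(spectrum))))
    # Pair each anchor with a few nearby later peaks:
    # (freq1, freq2, time delta) becomes one hash entry.
    hashes = set()
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1:i + 4]:
            hashes.add((f1, f2, t2 - t1))
    return hashes
```

Because the hashes encode frequency pairs and time deltas rather than raw audio, the same track re-encoded at a different bitrate produces a largely overlapping hash set.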
The fingerprint is matched against public video content indexed from YouTube, TikTok, and Instagram. Matching operates on audio segments, not metadata. A 30-second fragment inside a 90-minute DJ set is enough to trigger a detection.
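Conceptually, segment-level matching means a short fragment only needs enough overlapping fingerprint hashes to clear a threshold, which is why 30 seconds inside a 90-minute set is detectable. A hedged sketch (the flat `{track_id: hash_set}` index and the 20% threshold are hypothetical; production systems use inverted indexes over much larger catalogs):

```python
def match_segment(fragment_hashes, index, threshold=0.2):
    """Score a fragment's hash set against an index of {track_id: hash_set}.

    Returns the best-matching track id, or None if no track's overlap
    with the fragment clears the (illustrative) threshold.
    """
    scores = {}
    for track_id, track_hashes in index.items():
        overlap = len(fragment_hashes & track_hashes)
        if overlap:
            scores[track_id] = overlap / len(fragment_hashes)
    best = max(scores, key=scores.get, default=None)
    return best if best is not None and scores[best] >= threshold else None
```

Note the score is normalized by the fragment, not the indexed track: a 30-second excerpt can fully match even though it covers a tiny fraction of the source recording.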
Each confirmed match returns structured data: source URL, video uploader, upload date, view count, match duration, percentage of your track used, percentage of the video containing your track, and whether the audio was modified. When available, DJ identification, venue, city, and event date are extracted from contextual metadata.
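The record described above can be pictured as a typed structure. Field names here are illustrative, not our API schema; the contextual fields default to empty because they are only filled when the source video exposes them:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    """One confirmed match (field names are illustrative)."""
    source_url: str
    uploader: str
    upload_date: str
    view_count: int
    match_duration_s: float
    pct_of_track_used: float     # share of your track heard in the video
    pct_of_video_matched: float  # share of the video that is your track
    audio_modified: bool         # pitch/tempo-shifted or otherwise altered
    # Contextual fields, extracted only when available:
    dj: Optional[str] = None
    venue: Optional[str] = None
    city: Optional[str] = None
    event_date: Optional[str] = None
```

Keeping the contextual fields optional mirrors the text: the acoustic match is always complete, while venue-level context is best-effort.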
This is ACR — Automatic Content Recognition. Same foundational layer as Shazam. Different use case: reverse detection across user-generated content in real acoustic conditions.
SIGNAL SPECS
RECOGNITION
AUDIO FINGERPRINTING (ACR)
MATCH RESOLUTION
SEGMENT-LEVEL - PARTIAL TRACK DETECTION
AUDIO TOLERANCE
PITCH SHIFT - TEMPO CHANGE - COMPRESSION - AMBIENT NOISE
CONFIDENCE
THRESHOLD-BASED - FALSE POSITIVES MAY OCCUR
DETECTION OUTPUT
SOURCE URL - UPLOADER - VIEW COUNT - MATCH DURATION
SCAN MODE
CONTINUOUS - AUTOMATED
DATA ACCESS
PUBLIC CONTENT ONLY - NO LOGIN WALLS
FEEDS INTO
ID - FLOOR - PRESS
NOW YOU KNOW
The signal is already there.
We just built the infrastructure to read it.
BACK TO HOME →