A professional-grade audio transcription and analysis platform built for musicological research, featuring real-time audio processing, stem separation, and a complete mixing console powered by the Web Audio API.
- Real-time Audio Transcription - Convert audio to MIDI with advanced pitch detection
- Professional Audio Mixer - Multi-channel mixing with the Web Audio API
- Stem Separation - Isolate individual instruments from audio files
- Batch Processing - Process up to 50 files simultaneously
- Musical Notation - Generate sheet music and piano roll views
- Mixing Console - Professional DAW-style interface with level meters
- Responsive Design - Works seamlessly on desktop and tablet
- Beautiful UI - Modern design with shadcn/ui components
- Real-time Visualization - Audio waveforms and frequency analysis
- Enterprise Auth - Complete login/signup flows via Supabase
- Team Collaboration - Invite members and manage roles (Viewer/Editor/Admin)
- Email System - Professional branded email templates with Resend SMTP
- Role-Based Access - Granular permissions for project resources
```bash
# Install dependencies
npm install

# Set up environment (.env.local)
NEXT_PUBLIC_SUPABASE_URL=your_url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_key

# Start development server
npm run dev
```

Open http://localhost:3000 to see the application running, or access the production deployment on Streamlit.
Experience the professional audio mixer by visiting `/mixer-demo` (a channel-chain sketch follows this feature list):
- Multi-channel mixing with volume, pan, and effects controls
- Real-time level meters with visual feedback
- Mute/Solo/Record functionality per channel
- Master volume control and effects routing
- Export/import mixer settings
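The mixer UI drives a per-channel Web Audio graph. Below is a minimal sketch of what a channel configuration and its gain/pan/analyser chain might look like; the interface and function names are illustrative, not the component's actual props.

```typescript
// Illustrative channel shape and signal chain (not the component's exact API).
interface MixerChannel {
  id: string;
  label: string;
  volume: number;   // 0..1
  pan: number;      // -1 (hard left) .. 1 (hard right)
  muted: boolean;
  solo: boolean;
}

function createChannelChain(ctx: AudioContext, channel: MixerChannel) {
  const gain = ctx.createGain();
  const panner = ctx.createStereoPanner();
  const analyser = ctx.createAnalyser(); // feeds the level meters

  gain.gain.value = channel.muted ? 0 : channel.volume;
  panner.pan.value = channel.pan;

  // source -> gain -> panner -> analyser -> destination
  gain.connect(panner).connect(analyser).connect(ctx.destination);
  return { gain, panner, analyser };
}
```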
- Next.js 15 - React framework with App Router
- TypeScript 5 - Type-safe development
- Tailwind CSS 4 - Modern utility-first styling
- Supabase - Open source Firebase alternative
- PostgreSQL - Robust relational database
- Auth - Enterprise-grade authentication
- Resend - Reliable SMTP email delivery
- Web Audio API - Low-latency audio processing
- Audio Nodes - Gain, Panner, Analyser for professional mixing
- FFT Analysis - Real-time frequency and time-domain analysis (see the sketch after this list)
- MIDI Processing - Complete MIDI file generation and manipulation
- shadcn/ui - High-quality accessible components
- Lucide React - Beautiful icon library
- Framer Motion - Smooth animations and transitions
- Next Themes - Dark/light mode support
- React Hook Form - Performant forms with validation
- Zod - TypeScript-first schema validation
- Zustand - Simple state management
- TanStack Query - Powerful data synchronization
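As a rough illustration of the FFT analysis mentioned above, an `AnalyserNode` exposes per-bin magnitudes that can be polled on each animation frame (a sketch, not the project's exact rendering code):

```typescript
// Poll frequency-domain data from an AnalyserNode for meters/visualizations.
// Connect an audio source into `analyser` before rendering.
const audioContext = new AudioContext();
const analyser = audioContext.createAnalyser();
analyser.fftSize = 2048;                 // 1024 frequency bins

const bins = new Uint8Array(analyser.frequencyBinCount);

function render() {
  analyser.getByteFrequencyData(bins);   // 0..255 magnitude per bin
  // ...draw bins to a canvas or update level meters here...
  requestAnimationFrame(render);
}
render();
```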
```
src/
├── app/                      # Next.js App Router pages
│   ├── auth/                 # Authentication pages (login, signup, etc.)
│   ├── mixer-demo/           # Professional audio mixer demo
│   ├── stem-demo/            # Stem separation features
│   └── page.tsx              # Main transcription interface
├── components/               # Reusable React components
│   ├── audio-mixer.tsx       # Professional mixing console
│   ├── enterprise-layout.tsx # App shell with auth context
│   ├── collaboration.tsx     # Team management UI
│   └── ui/                   # shadcn/ui components
├── lib/                      # Audio processing utilities
│   ├── supabase.ts           # Database client & auth logic
│   ├── audio-analysis.ts     # Core audio analysis algorithms
│   ├── audio-processor.ts    # Audio file processing
│   ├── stem-separation.ts    # Instrument separation
│   └── midi-utils.ts         # MIDI file generation
└── hooks/                    # Custom React hooks
```
- Pitch Detection - Advanced autocorrelation algorithms (a simplified sketch follows this list)
- Note Extraction - Intelligent note onset detection
- Rhythm Analysis - Tempo and timing extraction
- Confidence Scoring - Quality metrics for transcription accuracy
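For context, the core idea behind autocorrelation pitch detection is to find the lag at which a signal best correlates with a shifted copy of itself; the simplified sketch below illustrates this (the implementation in `audio-analysis.ts` may differ):

```typescript
// Simplified autocorrelation pitch detector.
// Returns an estimated fundamental frequency in Hz, or null if none found.
function detectPitch(samples: Float32Array, sampleRate: number): number | null {
  const minLag = Math.floor(sampleRate / 1000); // ~1000 Hz upper bound
  const maxLag = Math.floor(sampleRate / 50);   // ~50 Hz lower bound
  let bestLag = -1;
  let bestCorr = 0;

  for (let lag = minLag; lag <= maxLag; lag++) {
    let corr = 0;
    for (let i = 0; i + lag < samples.length; i++) {
      corr += samples[i] * samples[i + lag]; // similarity at this lag
    }
    if (corr > bestCorr) {
      bestCorr = corr;
      bestLag = lag;
    }
  }
  return bestLag > 0 ? sampleRate / bestLag : null;
}
```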
- Note Grid - DAW-style visual note display
- Effects Processing - Reverb, delay, EQ simulation (a delay sketch follows this list)
- Note Editing - Click to add, select, and delete notes
- Automation Ready - Parameter automation framework
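As an example of the kind of effect simulated here, a feedback delay can be built from standard Web Audio nodes (illustrative only; the parameter defaults and routing are assumptions, not the project's code):

```typescript
// Feedback-delay effect: the delayed signal is fed back into the delay line
// to produce repeating echoes.
function createDelayEffect(ctx: AudioContext, delayTime = 0.3, feedback = 0.4) {
  const delay = ctx.createDelay(1.0);   // allow up to 1 second of delay
  const feedbackGain = ctx.createGain();

  delay.delayTime.value = delayTime;
  feedbackGain.gain.value = feedback;

  delay.connect(feedbackGain);
  feedbackGain.connect(delay);

  return delay; // connect a source into `delay`, and `delay` to the mix bus
}
```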
- Frequency Analysis - Band-based instrument separation (see the sketch after this list)
- AI-ready Architecture - Prepared for TensorFlow.js integration
- Instrument Detection - Automatic instrument identification
- Export Options - Individual stem export
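The band-based approach can be sketched with `BiquadFilterNode`s that split a source into rough low/mid/high regions; the cutoff values below are illustrative, not the values used in `stem-separation.ts`:

```typescript
// Split a source into rough frequency bands for stem-style separation.
function splitIntoBands(ctx: BaseAudioContext, source: AudioNode) {
  const low = ctx.createBiquadFilter();   // e.g. bass / kick region
  low.type = "lowpass";
  low.frequency.value = 200;

  const mid = ctx.createBiquadFilter();   // e.g. vocals / guitars
  mid.type = "bandpass";
  mid.frequency.value = 1000;
  mid.Q.value = 0.7;

  const high = ctx.createBiquadFilter(); // e.g. cymbals / air
  high.type = "highpass";
  high.frequency.value = 4000;

  source.connect(low);
  source.connect(mid);
  source.connect(high);

  // Render each band (e.g. via an OfflineAudioContext) to export stems.
  return { low, mid, high };
}
```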
```typescript
// Analyze an audio file (decodeAudioFile and analyzeAudio are project helpers)
const audioBuffer = await decodeAudioFile(file);
const analysis = await analyzeAudio(audioBuffer, options);

// Process with the mixer's Web Audio nodes
const audioContext = new AudioContext();
const gainNode = audioContext.createGain();
const pannerNode = audioContext.createStereoPanner();
const analyserNode = audioContext.createAnalyser();
```

Render the mixing console with the `AudioMixer` component:

```tsx
<AudioMixer
  channels={channels}
  onChannelUpdate={handleChannelUpdate}
  masterVolume={masterVolume}
  onMasterVolumeChange={setMasterVolume}
  isPlaying={isPlaying}
  onPlayPause={handlePlayPause}
/>
```

- Security - Updated dependencies, vulnerability fixes
- Performance - Lazy loading, optimized audio processing
- Responsive - Mobile-first design principles
- Accessibility - WCAG compliant components
- Internationalization - Multi-language support ready
- Stem Separation Guide - Instrument separation features
- Processing Settings Guide - Audio processing configuration
- Music Disciplines Guide - Research applications
- Ethnomusicological Studies - Field recording analysis
- Music Theory Research - Pattern and structure analysis
- Performance Analysis - Timing and expression study
- Music Production - Demo recording and arrangement
- Audio Engineering - Mixing and processing tools
- Sound Design - Audio effect and texture creation
- Music Theory - Visual learning tools
- Audio Engineering - Hands-on mixing experience
- Research Methods - Data collection and analysis
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Web Audio API - For powerful browser-based audio processing
- shadcn/ui - For beautiful and accessible UI components
- Next.js Team - For the excellent React framework
- Audio Research Community - For the algorithms and techniques that make this possible
Built with ❤️ for the audio and research communities.