HighLevel
We're looking for a Senior Software Engineer (SDE-III) to join our Media Core Team, which powers the backbone of HighLevel's media infrastructure, enabling media streaming (video, audio, images), transcoding, DRM, and asset delivery at massive scale. Our Media Core platform processes more than 50 million monthly media streams, handles petabytes of media data, and delivers adaptive bitrate content to millions of users worldwide.
We manage the complete media lifecycle: upload, transcoding, secure storage, and delivery, empowering creators to build video platforms, podcast platforms, course sites, and membership portals. As a Senior Software Engineer, you'll own critical services across our media stack, including transcoding pipelines, DRM-protected streaming, image processing, and CDN delivery infrastructure.
You'll work across the full stack: building media players, designing backend APIs, optimizing media processing workflows, and scaling distributed worker systems. This role requires deep expertise in media streaming systems, distributed architectures, and full-stack development. If you've worked on large-scale media platforms or scaled services handling billions of requests, this is your team.
Responsibilities
- Own and contribute to architecture and development across Media Core services, including transcoding, DRM, streaming, image processing, and CDN delivery.
- Design and implement high‑throughput media APIs with robust caching, message queues, and event‑driven architectures.
- Build and optimize media transcoding pipelines handling multi‑resolution encoding, audio processing, adaptive bitrate packaging, and cost‑efficient processing at scale.
- Develop DRM systems and secure media delivery infrastructure, including authentication, access control, and encryption pipelines.
- Architect and scale distributed worker systems for processing high volumes of media files with auto‑scaling and error recovery.
- Build high‑quality frontend experiences, including media players, upload widgets, and media management UIs.
- Optimize CDN delivery by implementing intelligent caching strategies and low‑latency streaming.
- Integrate observability, monitoring, and alerting systems to ensure platform reliability and rapid incident response.
- Debug complex production issues spanning frontend playback, backend transcoding, CDN configurations, and network performance.
- Participate in design reviews, on‑call rotations, and technical deep dives to support a culture of operational excellence and ownership.
- Leverage AI/LLM tools to accelerate development, refactoring, testing, and debugging across the media stack.
Requirements
- 4+ years of hands‑on software engineering experience building and scaling robust backend systems and high‑performance frontend applications.
- Media streaming systems experience: Strong understanding of media transcoding, streaming protocols, CDN architecture, and media pipelines at scale.
- Strong backend engineering skills: Proficiency in at least one backend language (Node.js, Go, Python, Java, or similar), TypeScript, distributed system design, API development, microservices architecture, and event‑driven systems.
- Media processing knowledge: Familiarity with media encoding tools, codec optimization, multi‑resolution encoding, and adaptive streaming, plus experience processing large media files efficiently.
- Frontend competence: Proficiency with modern frontend frameworks (React, Vue, Angular), advanced UI engineering patterns, component‑based architectures, state management, and CSS libraries such as Bootstrap or Tailwind CSS.
- Cloud and infrastructure experience: Working knowledge of cloud platforms (Google Cloud Platform or AWS), container orchestration, and CI/CD pipelines.
- Distributed systems knowledge: Experience with message queues (e.g., Redis, Kafka, Pub/Sub), worker architectures, async processing, and handling high‑throughput workloads.
- Database proficiency: Experience with PostgreSQL, MongoDB, and Redis, along with designing data models for media metadata and access control.
- Performance optimization: Experience optimizing backend APIs, media processing workflows, and frontend playback; familiarity with profiling and benchmarking.
- Security awareness: Understanding of authentication, authorization, DRM concepts, and secure media delivery practices.
- Observability and monitoring: Familiarity with monitoring and debugging tools (e.g., Grafana, Prometheus, Sentry), logging, error tracking, and debugging production issues.
- System design: Ability to design scalable solutions, understand distributed system patterns, and make informed cost/performance decisions.
- Ownership mindset: Track record of owning features end to end: shipping, monitoring, debugging production issues, and iterating based on feedback.
- Excellent communication: Ability to document systems, collaborate cross‑functionally with PMs/designers, and contribute to technical discussions.
Nice to have
- Experience with NGINX or similar CDN/edge delivery technologies.
- Knowledge of DRM systems and content protection technologies.
- Hands‑on experience with real‑time streaming and WebRTC.
- Image processing pipeline experience and format optimization.
- Experience with server‑side rendering for media applications.
- Familiarity with media analytics and telemetry systems.
- Open‑source contributions to media tools or libraries.