Arkansas Staffing
Senior Data Engineer
We are seeking a highly motivated, innovative, and collaborative Senior Data Engineer to join the Enterprise Technology Data Engineering & AI team. In this role, you will shape the next-generation data and AI platform, architecting and building modern, cloud-native pipelines (batch, streaming, and GenAI-enabled) to transform structured and unstructured data into trusted, self-service insights. This position will work closely with senior stakeholders across multiple business areas, lead technical initiatives, and mentor other engineers. This is an in-office position in West Hollywood, CA.

Key Responsibilities:
- Unified Data & AI Platform: Architect and develop a lakehouse stack and data management platform using tools such as Redshift or BigQuery, Airbyte, and Airflow (or similar).
- Streaming & Distributed Processing: Design real-time/streaming and batch pipelines to support event-driven analytics and near-instant insights, leveraging technologies like Kafka or Spark.
- End-to-End Pipelines: Build resilient ELT/ETL flows for relational, semi-structured, and unstructured data (e.g., PDFs, images, logs) with automated testing, lineage, and governance.
- GenAI by Default: Integrate AI into workflows: document intelligence, prompt-driven data quality checks, automated documentation, AI-assisted code development, and conversational data exploration.
- Reporting & Dashboards: Develop self-service semantic layers and dashboards in Tableau, Looker, or Superset to enable real-time exploration by stakeholders.
- Advanced Analytics & Forecasting: Collaborate with Finance, HR, and other functions to create predictive models, scenario analyses, and forecasting tools.
- Stakeholder Partnership: Gather requirements, translate business logic into elegant data models, and promote best practices across global teams.
- Technical Mentorship & Governance: Set coding standards, review designs, and advocate for secure, cost-effective architecture.

Requirements

Minimum Qualifications:
- Bachelor's or Master's degree in Computer Science or a similar field of study.
- Minimum of 8 years designing and operating data platforms for structured and unstructured workloads.
- Experience with Python, building data pipelines, advanced SQL (including complex joins), and cloud warehouses (Redshift, BigQuery).
- Proven expertise with Airflow, Airbyte, real-time streaming (Kafka, Kinesis, or Pub/Sub), and distributed processing (Spark).
- Strong knowledge of dimensional and wide-table data modeling and performance tuning.
- Experience applying GenAI/LLMs for document processing, conversational analytics, or code generation.
- Background in building predictive models, time-series forecasts, and scenario projections.
- Familiarity with CI/CD (GitHub), containerization (Kubernetes/Helm), and infrastructure-as-code.
- Excellent communication skills with the ability to align technical architecture to executive strategy.