BravoTech
Senior Data Engineer (Developer) - In-Office Position
Salary: Up to $145K
Location: Dallas
Join Our Innovative MediaLab Team as a Senior Data Engineer
Are you passionate about designing cutting-edge data solutions and eager to contribute to a dynamic, collaborative environment? We are seeking a seasoned Senior Data Engineer to join our Shared Services MediaLab team. In this role, you'll be at the forefront of building scalable, robust data infrastructures that power our strategic initiatives, from real-time analytics to advanced AI applications.
As a Senior Data Engineer, you will architect, develop, and maintain sophisticated data pipelines and infrastructure across cloud-native platforms such as Azure and AWS. You will work closely with data scientists, business analysts, IT teams, and stakeholders to deliver solutions that enable data-driven decision-making and support emerging AI and ML projects. Your expertise will help ensure our data environment is secure, scalable, and optimized for future growth.
What You’ll Do
Data Architecture & Infrastructure
Design and deploy enterprise-grade data architectures supporting structured, semi-structured, and unstructured data across multiple cloud platforms (Azure, AWS).
Implement scalable data lakes and data warehouses optimized for both batch and real-time workloads.
Develop and maintain data mesh architectures to facilitate self-service analytics while ensuring robust data governance and security.
Architect cloud-native solutions utilizing serverless computing, containerization, and microservices.
Data Pipeline Development
Build and orchestrate reliable, fault-tolerant data pipelines using modern ELT methodologies and tools like Apache Airflow, Azure Data Factory, and AWS Glue.
Develop real-time streaming solutions with Apache Kafka, Apache Pulsar, and cloud-native services to support live data processing needs.
Implement automated data quality frameworks with monitoring, alerting, and auto-remediation capabilities.
Create CI/CD pipelines for data workflows, incorporating automated testing, deployment, and rollback procedures.
AI & Advanced Analytics Integration
Embed machine learning workflows into data pipelines, enabling feature engineering, model training, and large-scale inference.
Support MLOps practices with model versioning, A/B testing, and automated retraining pipelines.
Build infrastructure to support generative AI initiatives, including vector databases and retrieval-augmented generation (RAG) systems.
Collaborate with data scientists and developers to produce scalable ML models and ensure efficient inference.
Data Governance & Security
Establish comprehensive data governance frameworks, including data lineage, metadata management, and cataloging.
Ensure compliance with privacy laws (GDPR, CCPA) by implementing data masking, encryption, and strict access controls.
Maintain audit trails for data processing activities and model predictions.
Performance & Monitoring
Optimize data processing performance via query tuning, indexing, and resource management.
Implement observability strategies, including metrics, logging, and distributed tracing for all data pipelines.
Conduct root cause analyses and resolve data quality or system performance issues swiftly.
Define and maintain SLAs for data freshness, accuracy, and system uptime.
Collaboration & Leadership
Work closely with cross-functional teams to gather requirements and deliver impactful solutions.
Provide technical mentorship to junior engineers and analysts.
Lead technical design reviews and contribute to strategic technology planning.
Document best practices, data architectures, and system workflows; lead knowledge-sharing initiatives.
Qualifications
At least 7 years of hands-on experience in data engineering, with a proven track record of building and maintaining large-scale production data systems.
Strong experience working directly with internal clients and stakeholders.
Extensive expertise in database development and management.
Bachelor's degree in Computer Science, Information Technology, or a related field (Master's preferred).
Proven experience in the following technical areas:
Cloud platforms: Azure & AWS (data services, infrastructure management)
Data frameworks: Apache Spark (PySpark, Scala, Java), Hadoop ecosystem, Databricks
Real-time processing: Kafka, Pulsar, Kinesis, Event Hubs
Data storage: relational, NoSQL, and cloud data warehouse platforms (PostgreSQL, MongoDB, Cassandra, Snowflake)
Data modeling and warehousing: dimensional modeling, star/snowflake schemas, slowly changing dimensions (SCD)
ETL/ELT tools: Azure Data Factory, Fabric Dataflow, AWS Glue, dbt
Infrastructure as Code: Terraform, Bicep, CloudFormation
Containerization & orchestration: Docker, Kubernetes
Data lakehouse architectures: Delta Lake, Apache Iceberg, Apache Hudi
MLOps tools and workflows: MLflow, Kubeflow, SageMaker
Vector databases and embedding techniques for AI applications
Observability tools: Datadog, Splunk, Prometheus, Grafana
CI/CD pipelines: Azure DevOps, GitHub Actions, Jenkins
Security best practices for cloud and data environments
Strong programming skills in Python, C#, and/or Scala.
Deep understanding of distributed systems, fault tolerance, and high‑availability architectures.
Excellent problem‑solving, communication, and collaboration skills.
Preferred Certifications & Additional Qualifications
Microsoft Certified: Azure Data Engineer Associate (DP-203)
Microsoft Certified: Fabric Data Engineer Associate or Fabric Analytics Engineer Associate
AWS Certified Data Engineer – Associate
Databricks Certified Data Engineer
SnowPro Advanced: Data Engineer or Architect
Experience with graph databases, knowledge graphs, and compliance frameworks in data privacy.
What We Offer
Exciting opportunities to work on innovative AI‑driven data projects.
Collaboration with a talented, motivated team of professionals.
Opportunities for professional growth and skills development.
A dynamic environment that encourages innovation and continuous learning.
Ready to shape the future of data at our company? Apply now and become a key driver of our data‑driven success!
Note: This position is based in-office. Candidates must be authorized to work locally.