Georgia System Operations Corporation
Data Engineer (Levels III - V)
Georgia System Operations Corporation, Tucker, Georgia, United States, 30084
Job Description
This position, within the Data Analytics Specialist job family of IT, focuses on building and operating the data platforms, pipelines, and services that power analytics and decision-making across the enterprise. You will design and maintain robust, scalable data integration and transformation processes, uphold data quality and governance standards, and collaborate closely with analysts and data scientists to deliver trusted, well-modeled data for BI, AI/ML, and operational use cases. Our environment emphasizes the Microsoft Azure data platform, including Azure Data Factory, Azure Synapse Analytics, Databricks, and Azure Data Lake, alongside enterprise BI and governance tools. Experience with distributed data processing (e.g., Spark) and orchestration is highly valued. This position requires a strong customer service focus, a positive attitude, and excellent oral and written communication skills. The role is responsible for compliance with all applicable laws, regulations, industry standards, and corporate policies, guidelines, and procedures, including but not limited to RUS, OSHA, SOX, NERC, FERC, and ITS requirements, and promotes an environment of compliance and continuous improvement to meet the Corporation's goals and objectives.

Job Duties:

Data Pipeline Engineering:
Design, build, and maintain reliable ETL/ELT pipelines to extract from diverse sources, transform and validate data, and load to enterprise storage/warehouse layers; optimize for scalability, performance, and cost.

Integration & Modeling: Integrate data from databases, APIs, and external systems; enforce consistency and integrity; contribute to dimensional and lakehouse modeling patterns that support BI/AI use cases.

Platform Engineering: Leverage Azure Data Factory, Synapse, Databricks, and Spark to standardize ingestion/processing frameworks; automate jobs, monitoring, and alerting for resilient operations.

Performance & Reliability: Tune pipelines, queries, and clusters; address bottlenecks; apply caching, indexing/partitioning, and workload management for dependable SLAs.

Quality & Governance: Implement in-pipeline data quality checks and validation rules; document lineage and assumptions; contribute to cataloging and stewardship practices in partnership with data governance.

Collaboration: Partner with data analysts and scientists to productionize data for dashboards and models; translate business needs into technical designs and reusable data products.

Continuous Improvement: Evaluate emerging tools and methods (e.g., orchestration, streaming, cost/performance optimization); proactively enhance standards, templates, and developer experience.

Required Qualifications:

Education:
Bachelor's degree in Computer Science, Data Science, Software Engineering, Information Systems, or a related quantitative field; Master's degree preferred.

Experience:
Level III - Minimum of 4 years in data engineering (or a closely related field), including hands-on pipeline development and operations.
Level IV - Minimum of 6 years designing and managing large-scale data solutions; leads project workstreams and cross-functional delivery.
Level V - Minimum of 8 years architecting and operating enterprise data platforms; standardizes patterns and provides technical leadership across IT.

Equivalent Experience (in lieu of the degree requirements above):
Level III - Minimum of 8 years of relevant experience may also be considered.
Level IV - Minimum of 10 years of relevant experience may also be considered.
Level V - Minimum of 12 years of relevant experience may also be considered.

Responsibility:
Level III - Independently delivers production-grade pipelines and data models; contributes to standards; begins leading small initiatives.
Level IV - Leads the design and rollout of new data domains and frameworks; mentors junior engineers; partners with stakeholders to improve data product reliability and usability.
Level V - Oversees major data platform initiatives; sets best practices for modeling, orchestration, performance, security, and governance; recognized as a subject matter expert across the IT function.

Licenses, Certifications, and/or Registrations (a plus, not required):
Microsoft Certified: Azure Data Engineer Associate;
Microsoft Certified: Azure Solutions Architect Expert; Databricks Certified Data Engineer Associate; ITIL Foundation.

Specialized Skills:

Technical Expertise: Proficiency with distributed data processing and orchestration (e.g., Apache Spark, Airflow, Kafka) to build scalable pipelines and streaming/batch workloads. Strong programming skills in Python and/or Java; expert SQL for transformation and performance-minded querying. Experience designing and deploying solutions on modern cloud data platforms, especially Azure (Data Factory, Synapse, Databricks, ADLS); exposure to Snowflake is a plus.

Data Architecture & Warehousing: Knowledge of lakehouse/warehouse concepts (e.g., medallion layering, dimensional modeling, partitioning); experience with relational and NoSQL stores.

Data Governance & Security: Implement data quality checks, schema enforcement, and lineage; align with stewardship, cataloging, and compliance standards (e.g., SOX) in partnership with IT and Security.

Soft Skills: Excellent problem-solving and analytical skills and attention to detail; strong communication with both technical and business stakeholders; customer-service orientation and a positive attitude.
Georgia System Operations Corporation is an Equal Employment Opportunity Employer, including veterans and individuals with disabilities. We are a drug-free workplace. All applicants are subject to substance abuse testing.
Job Posted by ApplicantPro