Truist Bank

Software Engineer III - Data Engineer

Truist Bank, Atlanta, Georgia, United States, 30383


The position is described below. If you want to apply, click the Apply Now button at the top or bottom of this page. After you click Apply Now and complete your application, you'll be invited to create a profile, which will let you see your application status and any communications. If you already have a profile with us, you can log in to check status.

Need Help?

If you have a disability and need assistance with the application, you can request a reasonable accommodation. Send an email to Accessibility (accommodation requests only; other inquiries won't receive a response).

Regular or Temporary:

Regular

Language Fluency: English (Required)

Work Shift:

1st shift (United States of America)

Please review the following job description:

The Software Engineer III specializing in Data Engineering plays a pivotal role in designing, developing, and maintaining scalable data pipelines, ETL (Extract, Transform, Load) processes, and analytics solutions to support enterprise-wide data-driven decision-making. This position requires advanced expertise in data integration, analytics, and software development, and involves close collaboration with cross-functional teams, including data scientists, business analysts, and stakeholders, to deliver impactful insights.

The role emphasizes innovation using platforms such as Informatica BDM, Ab Initio, Snowflake, and big data ecosystems like Hadoop, while maintaining high standards for data quality, security, and compliance. The engineer advocates for agile methodologies, CI/CD pipelines, and automated testing to accelerate delivery and minimize risk. Responsibilities include leading and participating in the development, testing, implementation, maintenance, and support of complex solutions, ensuring robust unit testing and support for release cycles. The engineer also builds monitoring capabilities, provides escalated production support, and maintains security controls in line with company standards. Typically, this role leads moderately complex projects and contributes to larger initiatives, solving complex technical and operational challenges and serving as a resource for less experienced teammates.

ESSENTIAL DUTIES AND RESPONSIBILITIES

Following is a summary of the essential functions for this job. Other duties, both major and minor, may be performed which are not mentioned below. Specific activities may change from time to time.

Architect and implement robust ETL workflows using tools like Informatica PowerCenter and Ab Initio, including data mapping, transformation logic, error handling, and performance optimization for high-volume data processing.

Design and develop data pipelines in Snowflake for efficient data warehousing, querying, and analytics, leveraging features such as Snowpark for custom processing and zero-copy cloning for cost-effective data sharing.
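
As an illustration of this duty, here is a minimal Snowpark-for-Python sketch: clone a table (zero-copy) and run a pushed-down aggregation. The connection parameters and table names (TRANSACTIONS, DAILY_TOTALS) are placeholders invented for the example, not details from this posting.

    from snowflake.snowpark import Session
    from snowflake.snowpark.functions import col, sum as sum_

    # Hypothetical connection parameters; in practice, pull credentials
    # from a secrets store rather than hard-coding them.
    session = Session.builder.configs({
        "account": "<account>",
        "user": "<user>",
        "password": "<password>",
        "warehouse": "<warehouse>",
        "database": "RAW_DB",
        "schema": "PUBLIC",
    }).create()

    # Zero-copy clone: a writable copy that shares storage with the source.
    session.sql("CREATE TABLE TRANSACTIONS_DEV CLONE TRANSACTIONS").collect()

    # Snowpark DataFrame transformation, executed inside Snowflake.
    daily_totals = (
        session.table("TRANSACTIONS_DEV")
        .filter(col("STATUS") == "POSTED")
        .group_by(col("POST_DATE"))
        .agg(sum_(col("AMOUNT")).alias("TOTAL_AMOUNT"))
    )
    daily_totals.write.save_as_table("DAILY_TOTALS", mode="overwrite")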

Build and maintain distributed data processing systems on Hadoop ecosystems (e.g., Hive, Spark, HDFS), ensuring scalability, fault tolerance, and seamless integration with upstream and downstream systems.
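
In the same spirit, a hedged PySpark sketch of a Hadoop-based pipeline step: read raw files from HDFS, aggregate, and publish a Hive table for downstream systems. The path, column names, and the table analytics.account_rollup are hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # enableHiveSupport lets the job read and write Hive-managed tables.
    spark = (
        SparkSession.builder
        .appName("account-rollup")        # hypothetical job name
        .enableHiveSupport()
        .getOrCreate()
    )

    # Read raw files from HDFS; the path and schema are placeholders.
    raw = spark.read.parquet("hdfs:///data/raw/accounts/")

    rollup = (
        raw.filter(F.col("status") == "active")
           .groupBy("account_type")
           .agg(F.avg("balance").alias("avg_balance"),
                F.count("*").alias("n_accounts"))
    )

    # Publish the result as a Hive table for downstream consumers.
    rollup.write.mode("overwrite").saveAsTable("analytics.account_rollup")
    spark.stop()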

Develop advanced SQL queries, stored procedures, and optimizations for both relational and NoSQL databases to support complex data extraction, aggregation, and reporting needs.
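
The flavor of analytical SQL this implies (window functions, partitioned aggregation) can be shown self-contained with Python's built-in sqlite3 module; the payments schema and rows are invented, and a SQLite build with window-function support (3.25 or later) is assumed.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE payments (account_id INTEGER, paid_on TEXT, amount REAL);
        INSERT INTO payments VALUES
            (1, '2024-01-05', 120.0), (1, '2024-01-20', 80.0),
            (2, '2024-01-07', 300.0), (2, '2024-02-02', 50.0);
    """)

    # Running total per account via a window function, the kind of
    # aggregation/reporting query the duty above describes.
    rows = conn.execute("""
        SELECT account_id, paid_on, amount,
               SUM(amount) OVER (PARTITION BY account_id ORDER BY paid_on)
                   AS running_total
        FROM payments
        ORDER BY account_id, paid_on
    """).fetchall()

    for row in rows:
        print(row)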

Create interactive dashboards, visualizations, and reports in Power BI, integrating multiple data sources to enable self-service analytics and real-time business intelligence.

Perform data analytics tasks, including exploratory data analysis, statistical modeling, and trend identification, to derive actionable insights and support predictive analytics initiatives.
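
A small pandas sketch of the exploratory side of this duty: profiling, group-wise aggregation, and a simple trend metric. The sample data is invented for the example.

    import pandas as pd

    # Invented sample data standing in for a real extract.
    df = pd.DataFrame({
        "month":   ["2024-01", "2024-02", "2024-03", "2024-04"],
        "channel": ["online", "online", "branch", "branch"],
        "volume":  [1200, 1350, 900, 870],
    })

    # Basic profiling and group-wise summary statistics.
    print(df.describe(include="all"))
    print(df.groupby("channel")["volume"].agg(["mean", "sum"]))

    # Simple month-over-month trend within each channel.
    df["pct_change"] = df.groupby("channel")["volume"].pct_change()
    print(df)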

Collaborate on full-stack development using the .NET framework (C#) for backend services and JavaScript (including frameworks like React) for frontend data visualization tools and user interfaces.

Lead code reviews, mentor junior engineers, and contribute to technical design documents, ensuring adherence to coding standards, design patterns, and security best practices (e.g., OWASP for web applications).

Troubleshoot and resolve production issues in data pipelines and applications, implementing monitoring, alerting, and logging using tools such as Splunk or Azure Monitor.

Drive continuous improvement by adopting industry best practices, including DevOps automation, containerization (Docker/Kubernetes), and machine learning operations (MLOps) for data workflows.

Participate in agile ceremonies, sprint planning, and stakeholder meetings to align technical solutions with business objectives and deliver value iteratively.

QUALIFICATIONS

Required Qualifications

The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

Degree and six to ten years of experience, or equivalent education and software engineering training or experience

In-depth knowledge of information systems and the ability to identify, apply, and implement best practices

Understanding of key business processes and competitive strategies related to the IT function

Ability to plan and manage projects and to solve complex problems by applying best practices

Ability to provide direction and mentor less experienced teammates, and to interpret and convey complex, difficult, or sensitive information

Preferred Qualifications

Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.

5 years of progressive experience in software engineering, with at least 3 years focused on data engineering, ETL development, and data analytics.

Proven track record of delivering production-ready data solutions in fast-paced environments, preferably in financial services, healthcare, or other regulated industries.

Strong problem-solving skills, with the ability to handle ambiguous requirements and scale solutions for terabyte-scale datasets.

Core Data Engineering & ETL Skills

Advanced proficiency in Informatica PowerCenter for ETL design, scheduling, and workflow management.

Expertise in Snowflake for cloud data warehousing, including SQL scripting, data sharing, and performance tuning.

Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Spark) for big data processing and distributed computing.

Expert-level SQL skills across multiple databases (e.g., Oracle, SQL Server, PostgreSQL), including query optimization, indexing, and data modeling.

Analytics & Visualization

Strong experience with Power BI for dashboard development, DAX scripting, data modeling, and integration with APIs/ODBC sources.

Proficiency in data analytics techniques using Python, R, or SQL for data cleaning, statistical analysis, and visualization.

Software Development

Solid experience with .NET (C#) for building scalable backend services and APIs.

Proficiency in JavaScript/TypeScript, including modern frameworks (e.g., React, Angular), for interactive web applications and data-driven UIs.

Additional Technical Skills

Familiarity with cloud platforms (AWS, Azure, GCP) for data services such as S3, Azure Data Factory, or BigQuery.

Knowledge of version control (Git), CI/CD tools (Jenkins, GitHub Actions), and container orchestration.

Understanding of data governance, lineage tracking (e.g., Collibra), and security protocols (encryption, access controls).

OTHER JOB REQUIREMENTS / WORKING CONDITIONS

Sitting: Constantly (More than 50% of the time)

Standing: Frequently (25% - 50% of the time)

Walking: Frequently (25% - 50% of the time)

Visual / Audio / Speaking: Able to access and interpret client information received from the computer and able to hear and speak with individuals in person and on the phone.

Manual Dexterity / Keyboarding: Able to work standard office equipment, including PC keyboard and mouse, copy/fax machines, and printers.

Availability: Able to work all hours scheduled, including overtime as directed by manager/supervisor and required by business need.

Travel: Minimal and up to 10%

General Description of Available Benefits for Eligible Employees of Truist Financial Corporation: All regular teammates (not temporary or contingent workers) working 20 hours or more per week are eligible for benefits, though eligibility for specific benefits may be determined by the division of Truist offering the position. Truist offers medical, dental, vision, life insurance, disability, accidental death and dismemberment, tax-preferred savings accounts, and a 401k plan to teammates. Teammates also receive no less than 10 days of vacation (prorated based on date of hire and by full-time or part-time status) during their first year of employment, along with 10 sick days (also prorated) and paid holidays. For more details on Truist's generous benefit plans, please visit our Benefits site. Depending on the position and division, this job may also be eligible for Truist's defined benefit pension plan, restricted stock units, and/or a deferred compensation plan. As you advance through the hiring process, you will also learn more about the specific benefits available for any non-temporary position for which you apply, based on full-time or part-time status, position, and division of work.

Truist is an Equal Opportunity Employer that does not discriminate on the basis of race, gender, color, religion, citizenship or national origin, age, sexual orientation, gender identity, disability, veteran status, or other classification protected by law. Truist is a Drug Free Workplace.

EEO is the Law | E-Verify | IER Right to Work

Key Skills: Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala
