Abbott

Overview

Join to apply for the Senior Data Engineer role at Abbott.

Abbott is a global healthcare leader that helps people live more fully at all stages of life. Our portfolio of life-changing technologies spans the spectrum of healthcare, with leading businesses and products in diagnostics, medical devices, nutritionals and branded generic medicines. Our 114,000 colleagues serve people in more than 160 countries. We’re focused on helping people with diabetes manage their health with life-changing products that provide accurate data to drive better-informed decisions, and we are revolutionizing the way people monitor their glucose levels with our new sensing technology.

Working at Abbott
At Abbott, you can do work that matters, grow and learn, care for yourself and your family, be your true self, and live a full life. You’ll also have access to:

- Career development with an international company where you can grow the career you dream of.
- Free medical coverage for employees who qualify, via the Health Investment Plan (HIP) PPO medical plan, in the next calendar year.
- An excellent retirement savings plan with a high employer contribution.
- Tuition reimbursement, the Freedom 2 Save student debt program, and the FreeU education benefit – an affordable and convenient path to getting a bachelor’s degree.
- A company recognized as a great place to work in dozens of countries around the world and named one of the most admired companies in the world by Fortune.
- Recognition as one of the best big companies to work for, as well as a best place to work for diversity, working mothers, female executives, and scientists.

The Opportunity
This Senior Data Engineer position can work remotely within the U.S.

What You’ll Work On
- Design and implement data pipelines to be processed and visualized across a variety of projects and initiatives
- Develop and maintain optimal data pipeline architecture by designing and implementing data ingestion solutions on AWS using native services
- Design and optimize data models on AWS Cloud using Databricks and AWS data stores such as Redshift, RDS, and S3
- Integrate and assemble large, complex data sets that meet a broad range of business requirements
- Read, extract, transform, stage, and load data to selected tools and frameworks as required
- Customize and manage integration tools, databases, warehouses, and analytical systems
- Process unstructured data into a form suitable for analysis and assist in analysis of the processed data
- Work directly with technology and engineering teams to integrate data processing and business objectives
- Monitor and optimize data performance, uptime, and scale; maintain high standards of code quality and thoughtful design
- Create software architecture and design documentation for the supported solutions, along with overall best practices and patterns
- Support the team with technical planning, design, and code reviews, including peer code reviews
- Provide architecture and technical knowledge training and support for the solution groups
- Develop good working relations with other solution teams and groups, such as Engineering, Marketing, Product, Test, and QA
- Stay current with emerging trends, making recommendations as needed to help the organization innovate

Qualifications
- Bachelor's degree in Computer Science, Information Technology, or another relevant field
- 2 to 6 years of recent experience in Software Engineering, Data Engineering, or Big Data
- Ability to work effectively within a team in a fast-paced, changing environment
- Knowledge of or direct experience with Databricks and/or Spark
- Software development experience, ideally in Python, PySpark, Kafka, or Go, and a willingness to learn new development languages to meet goals and objectives
- Knowledge of strategies for processing large amounts of structured and unstructured data, including integrating data from multiple sources
- Knowledge of data cleaning, wrangling, visualization, and reporting
- Ability to explore new alternatives or options to solve data mining issues, drawing on a combination of industry best practices, data innovations, and experience
- Excellent written, verbal, and listening communication skills
- Familiarity with databases, BI applications, data quality, and performance tuning
- Comfort working asynchronously with a distributed team

Preferred
- Knowledge of or direct experience with AWS services such as S3, RDS, Redshift, DynamoDB, EMR, Glue, and Lambda
- Experience working in an agile environment
- Practical knowledge of Linux

Abbott is an Equal Opportunity Employer, committed to employee diversity. Apply via Abbott’s careers site and follow Abbott on social media for updates.