Intercontinental Exchange

Senior Data Intelligence Engineer

Intercontinental Exchange, Atlanta, Georgia, United States, 30383


Overview

Job Purpose

The Document and Data Automation team is seeking a Senior Data Intelligence Engineer to build and improve AI-driven systems that automate mortgage workflows. We develop document recognition, data extraction, and analytics solutions that shorten loan processing times from weeks to minutes, directly supporting ICE Mortgage Technology's mission to streamline homeownership. As a Senior Engineer, you will design scalable data and machine learning solutions, uncover complex data patterns, and collaborate across engineering, data science, and product teams to integrate GenAI capabilities that accelerate intelligent automation, enhance decision-making, and drive measurable improvements in accuracy and efficiency.

Your Impact

- Accelerate intelligent automation by building scalable data pipelines, machine learning models, and GenAI solutions that transform mortgage documents and significantly improve the efficiency and accuracy of mortgage workflows.
- Enable data-driven decision-making through rigorous data analysis that enhances model performance, optimizes operational workflows, and facilitates measurable business outcomes.
- Drive innovation through collaboration by partnering with cross-functional teams to continuously evolve the technology stack and integrate cutting-edge GenAI capabilities.
- Directly influence customer experience by delivering solutions that significantly reduce loan processing times and increase reliability, directly benefiting homebuyers and the broader mortgage ecosystem.

Responsibilities

Develop Data Intelligence and ML Solutions:

Design, develop, and deploy scalable data pipelines, ML models, and GenAI-powered solutions that automate document processing and data extraction with high accuracy and performance. Leverage GenAI building blocks, such as prompting techniques, agentic frameworks, vector databases, and the Model Context Protocol (MCP), to enrich data workflows and enable intelligent decision-making. Conduct comprehensive data analysis to identify actionable insights, inform strategic decisions, and continuously improve system performance.

Drive Innovation and Collaboration:

Continuously explore and adopt emerging GenAI technologies to enhance automation capabilities. Rapidly build and test new approaches to solve complex business problems and improve operational efficiency. Support junior engineers and peers in applying GenAI concepts effectively by leveraging AI-assisted coding for rapid prototyping and feature implementation, and by incorporating GenAI building blocks directly into solution architectures to accelerate learning and delivery.

Promote a Data Product Mindset:

Align data pipeline design with end users' needs by defining clear outputs and business outcomes. Empower stakeholders to engage with data solutions as customers, fostering ownership and feedback-driven iteration.

Deliver and Optimize Impactful Results:

Continuously monitor deployed solutions to ensure optimal performance, accuracy, scalability, and reliability. Identify, troubleshoot, and resolve performance issues and data anomalies proactively. Track and communicate key performance metrics clearly and regularly to stakeholders.

Knowledge and Experience

- Bachelor's degree in Data Science, Engineering, Computer Science, or a related field (Master's preferred).
- 5+ years of software/data engineering experience, with a strong understanding of the full SDLC, CI/CD practices, and production-grade system design.

Machine Learning and GenAI Expertise:

- Solid understanding of statistics, data analysis, and machine learning fundamentals.
- Practical experience implementing and deploying solutions using GenAI building blocks such as prompting strategies, agentic frameworks, vector databases, RAG pipelines, reasoning models, and MCP.
- Hands-on experience with NLP and/or computer vision (CV).

Programming and Tooling:

- Extensive Python programming experience (3+ years), with strong proficiency in libraries such as pandas, scikit-learn, Hugging Face Transformers, MLflow, and Jupyter.
- Proficiency in SQL and experience with data querying, transformation, and analytics tools.
- Experience with big data platforms such as Databricks, Snowflake, or similar.
- Strong analytical skills, with a demonstrated ability to perform complex data analysis.
- Development tools and CI/CD pipelines: Git, Docker, Terraform, Jenkins.
- Communication and Collaboration: excellent written and verbal communication skills, with the ability to document complex systems clearly and collaborate effectively across cross-functional teams.

Preferred

- Proven experience using AI-assisted development tools such as Cursor and GitHub Copilot is a significant advantage.
- Practical experience with modern GenAI frameworks and orchestration tools (e.g., DSPy, LangChain, ADK, LlamaIndex, AWS Bedrock AgentCore, OpenAI Agents SDK, Adept ACT-2, Perplexity) is a significant advantage.
- Exposure to cloud platforms and services (AWS SageMaker, S3, Lambda, EC2, Step Functions, ECS, or equivalent).
- Experience developing RESTful APIs and web services.

#LI-MA1