Data Engineer at Koch Industries, Inc. in Wichita, Kansas, United States
Job Description
Your Job
Koch Engineered Solutions (KES) is looking for a Data Engineer to join our Information Technology (IT) team at our Wichita, KS headquarters. The IT team is a vital part of KES' strategy to improve business performance by profitably applying technology. We work as an integrated group with the Engineering, Operations, Commercial, and Financial teams that design and maintain facilities, and we function as a central capability within the enterprise, developing innovative solutions that transform KES work processes.
Our Team
As a member of the Information Technology team, you will need to thrive in a fast-paced, innovative environment. You will collaborate to develop solutions and prove their value through experimentation and scalable deployment in our business. Collaboration, creativity, and a focus on attaining positive business results are essential. As at a startup, you will need to be resourceful and capable of partnering with market solution providers who can accelerate progress toward our business objectives.
What You Will Do
Own data products end-to-end - from understanding the business problem, through source system assessment and data modeling, to production pipeline implementation in dbt and Snowflake
Build and maintain ELT pipelines following established engineering patterns (staging, intermediate, mart layers) with automated testing, validation, and documentation that holds up in production
Engage directly with business stakeholders - cost accountants, project managers, analysts - to understand their processes, validate business logic, and ensure your data models accurately represent the real world
Integrate data from diverse and sometimes messy source systems, including multiple ERPs, APIs, file-based feeds, and cloud services, making deliberate decisions about entity resolution and cross-system reconciliation
Structure data products to be AI-ready - with semantic clarity, consistent naming, documented lineage, and quality sufficient for both human analysts and AI/LLM consumption
Use AI tools as a natural part of your engineering workflow (Claude Code, Snowflake Cortex, and others) to accelerate development, improve code quality, and evaluate where AI capabilities can be embedded into the data products you build
Develop and maintain data quality frameworks - not just running tests, but deciding what to test, identifying where upstream business processes create risk, and building detection that catches problems before consumers do (a minimal sketch follows this list)
Contribute to platform reliability by building monitoring, alerting, and automation into your work rather than treating it as someone else's problem
Participate in legacy platform migration, transitioning workloads from older tools to the modern stack while preserving data integrity and business continuity
Share knowledge through code review, documentation, and pairing - building shared capability so the team isn't dependent on any single person for any single domain
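For illustration, here is a minimal sketch of the kind of quality gate described above, written in Python with pandas; the column names, thresholds, and data are hypothetical, not drawn from the actual KES stack:

```python
import pandas as pd

def quality_gate(df: pd.DataFrame, key: str, required: list[str],
                 max_null_rate: float = 0.01) -> list[str]:
    """Return a list of human-readable issues; an empty list means the
    frame may publish. A deliberately small example of 'deciding what
    to test': duplicate keys and unexpected null rates are two failures
    that upstream business processes commonly introduce."""
    issues = []
    dupes = df[key].duplicated().sum()
    if dupes:
        issues.append(f"{dupes} duplicate value(s) in key column '{key}'")
    for col in required:
        null_rate = df[col].isna().mean()
        if null_rate > max_null_rate:
            issues.append(f"null rate {null_rate:.1%} in '{col}' exceeds {max_null_rate:.0%}")
    return issues

# Illustrative usage with made-up project-cost data:
frame = pd.DataFrame({
    "project_id": ["P-100", "P-101", "P-101"],
    "cost_usd": [12_500.0, None, 9_800.0],
})
for issue in quality_gate(frame, key="project_id", required=["cost_usd"]):
    print("BLOCKED:", issue)
```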
Successful candidates will:
Show strong ownership
Internalize business context
Keep AI front of mind
Communicate clearly
Be an economic thinker
Who You Are (Basic Qualifications)
3+ years of hands-on experience building data pipelines in a cloud data warehouse environment
Strong SQL skills and practical experience with dbt (or equivalent transformation framework using version control, testing, and modular design)
Experience with Snowflake or comparable cloud data platform (Redshift, BigQuery, Databricks)
Experience working with cloud infrastructure services, particularly AWS (S3, Glue, Lambda) - see the sketch after this list
Experience with dimensional data modeling and translating business processes into data structures
Working knowledge of CI/CD and version control (Git, Azure DevOps, or similar)
Demonstrated experience engaging directly with business stakeholders to understand requirements and validate data accuracy - not just building from pre-written specifications
Active use of AI coding assistants (Claude Code, GitHub Copilot, or similar) in your current engineering work
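For illustration, a minimal sketch of one building block of a file-based feed: listing recently modified S3 objects with boto3 before loading them downstream. The bucket, prefix, and watermark are hypothetical, and running it requires the usual AWS credential chain:

```python
from datetime import datetime, timezone

import boto3

def new_source_files(bucket: str, prefix: str, since: datetime) -> list[str]:
    """List S3 object keys modified after `since` - the kind of small
    building block a file-based ELT feed needs before loading to the
    warehouse."""
    s3 = boto3.client("s3")
    keys = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            if obj["LastModified"] > since:
                keys.append(obj["Key"])
    return keys

# Hypothetical feed location and load watermark:
watermark = datetime(2024, 1, 1, tzinfo=timezone.utc)
for key in new_source_files("kes-erp-extracts", "daily/ap_invoices/", watermark):
    print("to load:", key)
```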
What Will Put You Ahead
Experience in multi-ERP or multi-source-system environments where entity resolution and cross-system reconciliation were part of the daily work (a rough sketch follows this list)
Python skills applied to data engineering automation and pipeline orchestration
Experience designing data structures with AI/ML consumption in mind (semantic modeling, feature engineering, LLM-ready schemas)
Experience with Power BI data modeling or semantic layer design
Experience migrating workloads from legacy platforms to modern cloud-native stacks
Experience building lightweight data applications (Streamlit or similar)
Background in manufacturing, engineering services, or EPC industries
Familiarity with Agile/Scrum delivery...
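For illustration, a rough sketch of the crude core of entity resolution: fuzzy-matching vendor names between two ERPs with Python's standard-library difflib. The vendor data and threshold are made up, and real cross-system reconciliation would be considerably more involved:

```python
from difflib import SequenceMatcher

def best_match(name: str, candidates: list[str], threshold: float = 0.85):
    """Return (candidate, score) for the closest match at or above
    `threshold`, else None - matches below the bar go to human review."""
    scored = [(c, SequenceMatcher(None, name.lower(), c.lower()).ratio())
              for c in candidates]
    best = max(scored, key=lambda pair: pair[1], default=None)
    return best if best and best[1] >= threshold else None

# Made-up vendor masters from two ERPs:
erp_a = ["Acme Industrial Supply", "Wichita Valve Co."]
erp_b = ["ACME Industrial Supply Inc", "Wichita Valve Company", "Great Plains Piping"]
for vendor in erp_a:
    print(vendor, "->", best_match(vendor, erp_b))
```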
Equal Opportunity Employer, including disability and protected veteran status.