Backend Engineer, Optimized Checkout & Link Data Engineering
Posted 2025-04-06

Stripe is a financial infrastructure platform for businesses. Millions of companies, from the world's largest enterprises to the most ambitious startups, use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. That means you have an unprecedented opportunity to put the global economy within everyone's reach while doing the most important work of your career.
The Optimized Checkout & Link team at Stripe builds best-in-class checkout experiences across web and mobile that delight consumers and streamline checkout flows for merchants. Based across North America, we're a diverse team that is deeply passionate about redefining the payment experience: creating outstanding value for merchants, increasing revenue, lowering costs, and growing their business. We work on Checkout, Payment Links, Elements, Payment Methods, and Link, each playing a crucial part in augmenting the economic landscape of the internet. Our days are filled with exciting challenges and collaborative problem-solving as we strive to simplify payment options, create unique business solutions, and enhance checkout ease. Join us in crafting the future of digital commerce.
We're looking for people with a strong background in data engineering and analytics to help us scale while maintaining correct and complete data.
- Conceptualize and own the data architecture for multiple large-scale projects, evaluating design and operational cost-benefit tradeoffs within systems
- Advocate for data quality and the excellence of our platform
- Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage and resolve issues
- Gather requirements, understand the big picture, and create detailed proposals in technical specification documents
- Productize data ingestion from various sources and data delivery to various destinations, and create well-orchestrated data pipelines
- Optimize pipelines, dashboards, frameworks, and systems to make data artifacts easier to develop
- Conduct SQL data investigations, data quality analyses, and optimizations
- Contribute to peer code reviews and help the team produce high-quality code
- Mentor team members by giving and receiving actionable feedback
We're looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.
- Bachelor's degree in Computer Science or Engineering; a Master's degree is preferred
- A strong engineering background and an interest in data
- 5+ years of experience writing and debugging data pipelines using a distributed data framework (Hadoop, Spark, Pig, etc.)
- Strong data modeling and database design skills, both relational and non-relational
- Strong SQL proficiency; SQL query optimization experience preferred
- Strong coding skills in Scala or Java, preferably for building performant data pipelines
- Strong understanding of, and practical experience with, systems such as Hadoop, Spark, Presto, Iceberg, and Airflow
- Well versed in software production engineering practices: version control, peer code reviews, automated testing, and CI/CD
- Excellent communication skills
- Experience with the AWS cloud is preferred