Finance Staff Data Engineer, AI Native
About the Role
About Life360
Life360’s mission is to keep people close to the ones they love. Our category-leading mobile app, Tile tracking devices, and Pet GPS tracker empower members to protect the people, pets, and things they care about most with a range of services, including location sharing, safe driver reports, and crash detection with emergency dispatch. Life360 serves approximately 91.6 million monthly active users (MAU) as of September 30, 2025, across more than 180 countries.
Life360 delivers peace of mind and enhances everyday family life with seamless coordination for all the moments that matter, big and small. By continuing to innovate and deliver for our customers, we have become a household name and the must-have mobile-based membership for families (and those friends who are basically family).
Life360 has more than 500 (and growing!) remote-first employees. For more information, please visit life360.com.
Life360 is a Remote-First company, which means a remote work environment will be the primary experience for all employees. All positions, unless otherwise specified, can be performed remotely (within the US) regardless of any specified location above.
We are AI Native
We are building an AI native company where AI is an integral part of how we build and operate. AI tool usage during interviews varies by role. You may be asked to demonstrate proficiency with AI tools, discuss how you leverage AI, or complete interview exercises without AI assistance. Your Recruiter will provide clear guidance as you move through the interview process.
Undisclosed use of AI not previously discussed with or approved by your Recruiter may impact your candidacy.
About The Team
The Finance Data Team sits at the intersection of the Finance & Accounting teams and Life360’s data. We provide the data ingestion / processing / reporting / egress our partner teams in Finance & Accounting need to do their work, and we rigorously ensure SOX compliance. We push the envelope on how work is done, implementing AI tools that accelerate our own pace of development and expand the capabilities we deliver to stakeholders.
We are hiring a bar-raising Staff Data Engineer to own our ingress / egress capabilities and build cross-cutting tooling that improves developer experience and security. This role requires someone who can step into ambiguity, make sound architectural decisions, eliminate operational fragility, and establish an engineering discipline that others adopt.
You will serve as a technical reference point - shaping standards, influencing cross-team architecture, and driving initiatives to clear, production-ready outcomes. We value engineers who are direct, collaborative, and proactive in surfacing risks early, while helping build a team culture where high standards and psychological safety coexist.
About the Job
The Life360 Finance Data Team works as the integrator for numerous systems - bringing data into the Finance Data Warehouse, transforming it, and pushing it to its relevant destinations (reporting, data asset deliverables, tools, etc). To support that role, we are continuously building and enhancing our system - adding new data, new transformations, and new tooling to improve developer velocity and ‘buy down’ the overhead costs of maintaining it.
As a Staff Data Engineer you will drive forward the:
- Data ingestion suite - a mix of capabilities ranging from Fivetran to completely custom connectors that bring data into our warehouse and monitor for data quality (a sketch of the idea follows this list).
- Data transformation suite - enhancing our dbt environment (dbt core) and the suite of tools that we leverage to build data models, semantic models, and other interfaces.
- Data egress suite - a mix of capabilities that allow us to push data to its end destinations.
- CI/CD pipelines and other tools / capabilities to enhance the developer experience and velocity.
- Infrastructure and networking behind our warehouse and related connectors.
- Databricks configuration and capabilities.
- Security posture and access controls.
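As a purely illustrative example of what a custom connector in the ingestion suite might look like, here is a minimal Python / PySpark sketch of an ingestion step with built-in quality gates. The path, table name, key column, and 1% null tolerance are assumptions for illustration, not our actual implementation.

```python
# Illustrative sketch only: paths, columns, and thresholds are hypothetical.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("finance-ingest-sketch").getOrCreate()

def ingest_with_quality_gates(source_path: str, target_table: str) -> DataFrame:
    """Land a raw batch, run basic quality gates, then append to the warehouse."""
    df = spark.read.format("json").load(source_path)

    # Gate 1: refuse to load an empty batch rather than silently succeed.
    if df.count() == 0:
        raise ValueError(f"Empty batch from {source_path}; refusing to load.")

    # Gate 2: surface null rates on a key column before data reaches Finance models.
    null_rate = df.select(
        F.avg(F.col("transaction_id").isNull().cast("int"))
    ).first()[0]
    if null_rate is not None and null_rate > 0.01:  # hypothetical 1% tolerance
        raise ValueError(f"transaction_id null rate {null_rate:.2%} exceeds tolerance.")

    # Append with audit columns so every load is traceable for SOX review.
    (
        df.withColumn("_ingested_at", F.current_timestamp())
          .withColumn("_source", F.lit(source_path))
          .write.mode("append")
          .saveAsTable(target_table)
    )
    return df
```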
The ideal candidate:
- They have spent years building out data platforms / infrastructure and creating the ingress / egress data frameworks used in pipelines. They have tackled numerous challenges and found novel solutions to data ingestion, processing, and egress problems.
- They have learned how to leverage LLMs for development velocity and analytics - not just asking them to write the code, but using the tools to support their development under clear guidance, and accepting ownership of the work produced as their own.
- They have learned to think about scalability / velocity / experience of future development and not just shipping the current project.
- They are part software engineer, part data integration engineer, and part data platform engineer.
- They adhere to the controls / procedures / separation of duties necessary to maintain our SOX compliance.
We are looking for someone with strong engineering depth who demonstrates ownership, decisiveness, and the ability to elevate both the system and the team around them.
For candidates based in the US, the salary range for this position is $190,000 to $280,500 USD. For candidates based in Canada, the salary range for this position is $220,000 to $260,000 CAD. We take into consideration an individual's background and experience in determining final salary; therefore, base pay offered may vary considerably depending on geographic location, job-related knowledge, skills, and experience. The compensation package includes a wide range of medical, dental, vision, financial, and other benefits, as well as equity.
AI / LLM Usage
The Finance Data Team leverages LLMs to support code generation, analysis, and other use cases.
- Cursor / Claude Code.
- Other tools (Wispr Flow, Claude & Claude Cowork, Gemini / Codex).
Your experience with AI / LLM usage should include managing code generation with a close eye on quality, standards, and testing - owning the outputs as your own. Your fluency with these tools will drive your velocity and your effectiveness within our environment.
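As an illustration of that discipline (hypothetical, not Life360 code): an LLM-generated transformation ships with tests that the engineer writes and owns. Here is what that might look like in pytest for an assumed revenue-allocation helper.

```python
# Hypothetical example: the function, figures, and tolerance are illustrative only.
from decimal import Decimal

def allocate_revenue(total: Decimal, weights: list[Decimal]) -> list[Decimal]:
    """Split `total` across `weights`, pushing rounding residue into the last bucket."""
    parts = [(total * w).quantize(Decimal("0.01")) for w in weights[:-1]]
    parts.append(total - sum(parts))  # last bucket absorbs rounding error
    return parts

def test_allocation_sums_to_total():
    total = Decimal("100.00")
    parts = allocate_revenue(
        total, [Decimal("0.3333"), Decimal("0.3333"), Decimal("0.3334")]
    )
    assert sum(parts) == total  # no pennies created or destroyed

def test_single_bucket_gets_everything():
    assert allocate_revenue(Decimal("42.00"), [Decimal("1")]) == [Decimal("42.00")]
```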
What You’ll Do
- Architect and evolve scalable data ingestion and egress frameworks and pipelines that are well tested and offer strong data quality monitoring.
- Architect and evolve our CI/CD processes - enhancing the testing environment and observability (such as building LLM-driven reviews with context awareness through data diffing, lineage analysis / downstream impact analysis, and general context; a diffing sketch follows this list).
- Design the delivery architecture for data assets shipped to partner teams, reducing the manual operational overhead associated with month-end close.
- Enhance our Claude Code / LLM development support capabilities - creating tools / skills / agents that give our LLMs more context and help us continually improve their abilities to debug, create code, and maintain systems.
- Enhance our security posture in our AWS / Databricks environment.
- Design and implement distributed data processing systems using Spark and Databricks on AWS.
- Establish clear ingestion and integration boundaries that eliminate single points of failure.
- Proactively surface risks, dependencies, and tradeoffs before they impact delivery.
- Produce clear technical artifacts and recommendations for stakeholders and leadership.
- Design logical and physical data models balancing flexibility, performance, governance, and scalability.
- Partner closely with the Analytics Engineers on the Finance Data Team to support high-quality downstream data modeling & reporting.
- Harden pipelines with monitoring, alerting, SLAs, and recovery mechanisms.
- Mentor engineers and elevate distributed systems rigor across the team.
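To make the data-diffing idea above concrete, here is a minimal sketch of how a candidate change's impact on a table might be summarized for an automated review. The table names, key column, and summary shape are assumptions, not our actual tooling.

```python
# Illustrative sketch: table names, key column, and summary shape are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("data-diff-sketch").getOrCreate()

def diff_summary(prod_table: str, dev_table: str, key: str) -> dict:
    """Summarize how a candidate change shifts a table relative to production."""
    prod = spark.table(prod_table)
    dev = spark.table(dev_table)

    # Row-level drift: keys the change adds or drops.
    keys_added = dev.join(prod, key, "left_anti").count()
    keys_removed = prod.join(dev, key, "left_anti").count()

    # Schema drift: columns a reviewer should see up front.
    prod_cols, dev_cols = set(prod.columns), set(dev.columns)

    return {
        "rows_prod": prod.count(),
        "rows_dev": dev.count(),
        "keys_added": keys_added,
        "keys_removed": keys_removed,
        "columns_added": sorted(dev_cols - prod_cols),
        "columns_removed": sorted(prod_cols - dev_cols),
    }

# The summary would be serialized into the review prompt alongside lineage and
# downstream-impact context, so the review sees data changes, not just code.
```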
What We’re Looking For
- 8+ years designing and operating high-volume distributed data systems in production.
- Deep expertise with a cloud data platform (Databricks strongly preferred) and AWS from an infrastructure / services architecture, deployment, and ownership perspective.
- Strong proficiency in Python, SQL, and Spark for large-scale processing.
- Strong proficiency with modern CI/CD practices (creating GitHub Actions workflows, writing Terraform to manage infrastructure in Databricks / Airflow / AWS, and others).
- Hands-on experience with dbt from an infrastructure / deployment perspective and understanding of how platform decisions impact downstream modeling.
- Strong gra