Engineer II – Data Engineer

Palo Alto

Saturday, 25 April 2026

Responsibilities:
- Design and build scalable, resilient data pipelines (orchestration, transformation, delivery) that support analytics and downstream products.
- Use modern developer tooling effectively, including AI-assisted coding (e.g., Cursor, GitHub Copilot), to accelerate delivery while maintaining code review, testing, and governance (no secrets in prompts or code; repo-aligned patterns).
- Collaborate cross-functionally across the full data lifecycle with analysts, platform engineers, and product partners, from requirements through production support.
- Participate in design sessions and code reviews with peers to improve the correctness, performance, security, and operability of data systems.
- Define, create, and support reusable pipeline patterns and standards (e.g., layering, testing, incremental design, naming, documentation) from both business and technology perspectives.
- Leverage AI models to generate SQL and Python, dbt (models, tests, macros, incremental strategies), Apache Airflow (DAGs, dependencies, backfill/retry patterns), cloud data warehouse platforms (e.g., Snowflake), and related integration patterns; then apply engineering expertise to review and improve code quality.
- Deliver using an Agile methodology, continuous integration/continuous delivery, Infrastructure as Code where applicable, scripting for automation, platform consoles for warehouse and orchestration, and observability tooling (logging, metrics, alerting; for example, dashboards and APM where used).
- Build pipeline definitions and apply strong technical judgment to choose and implement solutions that balance latency, cost, freshness, and reliability.
- Share best practices and improve processes within and across teams.

Qualifications:
- Strong hands-on experience with SQL, dbt, and Python for data transformation and pipeline automation.
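The backfill/retry and idempotency patterns named above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not code from the role or any specific library: the in-memory `warehouse` dict, `load_partition`, and `backfill` are all hypothetical stand-ins for a warehouse table and an orchestrated load task.

```python
# Minimal sketch of an idempotent, date-partitioned backfill with retries.
# All names here (warehouse, load_partition, backfill) are hypothetical,
# not from the job listing or from dbt/Airflow themselves.
from datetime import date, timedelta

warehouse: dict[date, list[dict]] = {}  # stand-in for a table keyed by partition


def load_partition(day: date, source_rows: list[dict]) -> None:
    """Idempotent load: overwrite the whole partition so reruns are safe."""
    warehouse[day] = [r for r in source_rows if r["event_date"] == day]


def backfill(start: date, end: date, source_rows: list[dict], max_retries: int = 3) -> None:
    """Replay one partition per day; retry transient failures per partition."""
    day = start
    while day <= end:
        for attempt in range(max_retries):
            try:
                load_partition(day, source_rows)
                break
            except Exception:
                if attempt == max_retries - 1:
                    raise  # exhausted retries: surface the failure
        day += timedelta(days=1)


rows = [
    {"event_date": date(2026, 4, 1), "value": 1},
    {"event_date": date(2026, 4, 2), "value": 2},
]
backfill(date(2026, 4, 1), date(2026, 4, 2), rows)
backfill(date(2026, 4, 1), date(2026, 4, 2), rows)  # rerun: no duplicate rows
```

Because each run replaces its partition wholesale rather than appending, a failed or repeated backfill converges to the same state, which is the property orchestrators like Airflow rely on when retrying or re-running historical DAG intervals.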
- Proven understanding of data pipeline architecture (batch workflows, idempotency, data quality, error handling, backfills) and how pipelines interface with a warehouse-centric analytics stack.
- Experience contributing to the architecture and design of data systems (layering, modeling patterns, reliability, scaling, cost awareness).
- Working knowledge of structured data interchange (e.g., JSON, XML, CSV as sources), APIs, and file-based ingestion patterns as used in analytics pipelines.
- Solid grounding in computer science fundamentals (e.g., complexity, joins, partitioning concepts) applied to data processing.
- Experience with Git tools and standard branching/review workflows.
- Familiarity with cloud data and orchestration services (e.g., Snowflake and managed Airflow or equivalent).
- Experience with continuous delivery and Infrastructure as Code for pipeline repos or supporting infrastructure.
- Strong oral and written communication skills.
- Strong problem-solving and debugging skills across SQL, logs, and orchestration failures.
- Practical experience working in an Agile environment.
- Ability to deliver in a fast-paced, priority-driven setting.
- Knowledge of developer tooling across the SDLC (task management, source control, build/deploy, operations, collaboration tools), including AI-assisted IDEs used responsibly alongside dbt and Airflow workflows.

Experience:
- 2 years of non-internship professional experience in data engineering, software engineering with a data focus, or equivalent.
- Years contributing to design and architecture of data pipelines or analytics data products (models, DAGs, warehouse objects).
- 2 years building and operating ETL/ELT or transformation-heavy systems using SQL-centric tooling (required: dbt or equivalent transform discipline; Airflow or comparable orchestration).
- 2 years with AWS, GCP, Azure, or comparable cloud platforms in a data or backend context.
Education: Bachelor’s degree in Computer Science, Information Systems, or equivalent education or work experience.

Annual Salary: $75,000.00 - $260,000.00
